Will Kubernetes Collapse Under the Weight of Its Complexity? (influxdata.com)
205 points by tim_sw on May 28, 2018 | 209 comments



This whole image, to me, represents a big problem with software engineering today: https://twitter.com/dankohn1/status/989956137603747840

The industry is full of engineers who are experts in weirdly named "technologies" (which are really just products and libraries) but have no idea how the actual technologies (e.g. TCP/IP, file systems, memory hierarchy, etc.) work. I don't know what to think when I meet engineers who know how to set up an ELB on AWS but don't quite understand what a socket is...


The overemphasis on products is a real problem because it makes the industry so susceptible to marketing snake-oil.

Who would trust a doctor who proclaims his expertise in terms of brand names: "I've got ten years of experience in Cipro and Amoxil. Lately I've been prescribing Zithromax too! It's exciting to write that big Z. I really like Pfizer's community, their Slack is so helpful."


Basically yes, if employers/customers only look for products and not general experience and signs of being a good crafts(wo)man, it's not useful.

On the other hand, someone who has already worked with 10 languages, 7 web frameworks and 5 deployment/config management tools, while showing good craftsmanship, has enough experience to learn one more of these quickly. Only for a short-term gig could the ratio of learning time to productive time be disproportionate.

I've experienced it being handled the latter way rather than the former, and I don't remember ever not getting a job for having not yet worked with one of these tools, but rather being chosen for experience and quality work.


We trust surgeons who proclaim expertise in specific named procedures.

Whether we like it or not, software engineering is becoming a trade. Trades aren't inherently low skill or high skill.


Who are also heavily trained in the physiology of the organs they operate on, long before they're allowed anywhere near a scalpel.

What you say about trades is true, but then we need to start making the distinction between software engineers and software technicians. I doubt that will go down well in an industry where everyone wants to be a senior software engineer, a tech lead or higher...


Interesting, in Mexico we do have that difference: We have got an "Informatics Technician" ( http://cecytev.edu.mx/wp-content/uploads/2012/03/Informatica... ) and Informatics Licenciate ( http://www.uabc.mx/formacionbasica/FichasPE/Lic_en_Informati... ) with all the flavours (Software Engineer, CompSci, etc).

I only wish that Informatics Technicians were better prepared. And that Software Engineers/Licentiates were really worth their salt.


I'm thinking the physiology of the organs might be less complicated than understanding a modern application at all levels of the stack.

Maybe that would be understanding the organs, the underlying physical processes those organs regulate, the chemicals that are released as part of those processes, and the effects and composition of those chemicals, and whether it is possible to induce similar effects through replacement chemicals in the way of medication.


I'd say the complexity is quite similar, but technology is better defined than biology. Each area can still be segmented into layers of understanding. For instance, I have very little experience with machine-level programming, but I have a general idea about how the disk, RAM, CPU and cache are connected. Enough information to design fast programs, but not enough to re-engineer that layer myself. Above that layer I am well experienced, all the way up to configuring high-level software packages. Below that layer (circuit design, maybe) I am basically in the dark.

Biology is not much different, though there, schooling seems to do a better job of exposing the student to all the layers we know about.


I have mixed feelings about this.

On one hand I definitely agree that it's good to know lower level technologies and that's something I always ask people in the interviews. I think it's important because I know it.

On the other hand, there is no end to how low you can go in the technology stack. Do you need to know how sockets work underneath? Low-level network protocols? Do you need to know how hardware works, because it probably influenced decisions that were made at the low levels of the software stack? I know what a socket is, but maybe the only reason I know it is because I'm old and I started coding at a time when raw sockets were the only way to implement network communication. When we have modern libraries and frameworks, do we really need low-level knowledge?


Being able to go all the way down the software stack makes it much, much easier to keep said stack honest. Debugging is often a desperate attempt at establishing ground truth. Without a low-level understanding, you're always at the mercy of your tools, and how they have decided to curate the information they're feeding you. Even the most well-meaning curation can be so frustratingly deceiving as to incite violence, and god help you if something you're interacting with has a smug sense of knowing what's best for you.

There have been a number of cases where I've had to rely on the ability to debug binaries directly at -O3, or to resort to Wireshark to get the dose of reality needed to challenge my [our] flawed mental model. If I hadn't been able to do those things, I'd probably still be there, pondering those defects, if not in body [due to declaring outright failure], at least in spirit.


> you're always at the mercy of your tools

More than tools they are products nowadays :-(


You can make sense of all the back and forth in k8s networking? That needs more than packet capturing, I suppose. How do you translate a million packets into something useful?


Most of k8s networking is actually not part of k8s? They don't really have an integrated overlay network, except for kube-proxy (which is lackluster).

If you are running k8s, you should run a proper overlay network first.


"Running k8s" means also running a CNI of course, otherwise k8s doesn't work.


"When we have modern libraries and frameworks"

I would question the assumption of having "modern" libraries, or at least rephrase it in terms of "performance" and "reliability":

Do we really have performant libraries (do they at least saturate the hardware)?

When it comes to networking, I think the answer is going to be more and more, no, we do not have performant libraries and software is being outpaced by hardware. Hardware is sitting idle thanks to our libraries.

Do we really have reliable libraries?

When it comes to storage, again I think the answer is no. For example, Linux RAID has long suffered from a plethora of faulty logic (pick a random disk sector as the canonical sector, split-brain etc.) and of course Linux RAID is not at the right layer to even have a chance of getting this right. And yet even ZFS lacks algorithms to detect or prevent split-brain. It's possible to detect (with 2 disks) and possible to prevent (with 3 or more disks), by taking quorum from the longest partially ordered set in the topology, but even ZFS (which is still awesome) is not reliable in this regard.

When it comes to networking, even TCP is not that reliable and has huge issues with bufferbloat. For example, a single person uploading a Gmail attachment on a 10 Mbps line will break the Internet for a second or two for everyone at your office (try it sometime: get everyone running "ping" and then upload an attachment). This is why Apple's background software update uses LEDBAT. Most libraries are not using LEDBAT or similar but are trusting TCP's congestion algorithms to do the right thing.
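
(To illustrate: on Linux the usual bufferbloat mitigation is a smarter queueing discipline such as fq_codel; a rough sketch, assuming an interface named eth0 and root access:)

  # see which queueing discipline and congestion control are currently in use
  tc qdisc show dev eth0
  sysctl net.ipv4.tcp_congestion_control
  # swap the default queue for fq_codel to keep standing queues (bufferbloat) down
  tc qdisc replace dev eth0 root fq_codel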


My personal advice in these cases is: try to be familiar with (at least) the abstraction layers below and above you.

Are you a storage expert? Do not ignore how storage hardware is designed (even from a historical perspective) and frequently think about the system calls it is your responsibility to implement.

Are you developing a sidecar proxy? Be fluent in low level network programming, and be aware of the interface you are offering to the application.

Are you a manager? Focus on your own manager and objectives, but do not forget to empathically look at your team members.


There are specialist subfields of programming nowadays: frontend, mobile app, embedded/microcontroller.

And a frontend site builder doesn't have to know anything about transistors, FinFET, system calls, and only the most curious would know about reflow and painting calls in a browser's rendering engine. Sure, the quest for performance always pushes people toward the metal, but from a website's perspective that's just the browser, and maybe TCP window tuning. (And sometimes someone finds that DNS was slow, but sometimes it was a bad BGP constellation, and there was nothing to do really. Or maybe the load balancer - or the backend - was overloaded because there was mutex contention in the kernel - or in the DB behind the backend, but that's again rather far from the usual sphere of investigation for "Frontend Engineers".)

The important point would be to be able to go deeper, if there's a need. Or go higher, more abstract. That's engineering (or software design): recognizing, analyzing, and solving problems.

I can't really blame the people who get by just knowing parts of AWS. They make great money by being a Cloud Consultant. It simply shows the low hanging fruits on the market. And how bad the demand side is when it comes to picking the experts.


> And how bad the demand side is when it comes to picking the experts.

You meant, there is a very strong demand ?


I mean big companies that hire "AWS consultants" are big, slow, dumb, incompetent blobs of IT.

And yes, there's a demand for these specific skills, because big inefficient companies are willing to pay for just this, because their in house people are usually overworked or simply not allowed to do it themselves without an expert on site (or on call, or on skype).

And of course AWS is a big jungle of buzzwords itself, without clear documentation (and that's of course very important for them, they don't want to give away trade secrets, nor should it matter for the guarantees they offer, but their own custom nomenclature and sort of arbitrary abstractions over the standard Xen/KVM/qemu, VLAN, VXLAN, etc. stuff make it very hard to know exactly what's going on), so it's no surprise that there's demand for this. But as others said, sometimes these consultants have rather narrow skills.

And to add a bit more to this, generalists are few, and always quickly disappear from the market, so the holes/niches are plugged by whoever happens to be around.

Finally, I don't exactly blame these corps, it's just what makes sense nowadays. Need for extremely short turn around, instant risk-elimination, ASAP ROI, all point to "experts" and "consultants". No point in getting someone who understands the full stack top-to-bottom, when the project is in the initial phase and you just need EC2 and a DB on RDS.


Yeah, thanks for the clarification. As a technical person I find myself a bit confused, as I don't know if I should follow some buzz and try to make money in the short term or go deep and go for the long term. Will it pay off?

Of course I get more satisfaction from "feeling" like a "deep" and "true" technical folk.


> money in the short term [or] go deep and go for the long term

I think these are never contradictory, at least I think software jobs allow for the flexibility to keep working on high-yield things while learning something for depth on the side. (Or vice-versa, if you live off your side job, and do/learn something deeper/longterm.)

Especially with network, virtualization, security, distributed systems (DB, storage), etc. the ideas that are trendy now are spin-offs of older things. (Docker is basically LXC with a sane UI/UX, which is just chroot done better, a'la BSD jails, which is just Solaris zones, which is just IBM mainframe partition shit. Similarly everything on the network, like overlay networking, VXLAN, Geneve, OpenFlow, etc. are just natural layering abstractions and applied trade offs that were considered silly a decade ago, or were simply not needed a decade ago. Now that north-south traffic is dwarfed by east-west traffic in every system [this means that north is the client, south is your "DB", and east-west just means services in your system, and since people nowadays use a lot more complicated systems, the inter-component = intra-system communication is huge, compared to the few MBs sent to the client].)


Yeah, same pattern applies so many times, Slack -> IRC + WEB and many others.

> Now that north-south traffic is dwarfed by east-west traffic in every system [this means that north is the client, south is your "DB", and east-west just means services in your system, and since people nowadays use a lot more complicated systems, the inter-component = intra-system communication is huge, compared to the few MBs sent to the client].)

Never thought about it, in those terms.


I agree that it's not necessary, but:

Sockets are generally accepted as the standard. If you want to stream data to another computer, that's pretty much the only way to do it. The solution has been decided already.

A lot of these frameworks live in a space that's still evolving. We, as a community of developers, haven't agreed on the one best way to do it. As such, there are multiple offerings. In these cases, I feel it is definitely necessary to know what's underneath, in order to better evaluate one's options.


> On the other hand there is no end on how low you can go in the technology stack.

Not completely true. I think it's possible to have a complete, simplified image of a computer in one's head while working on the newest version of your web app, going into depth in the areas where problems arise.

And usually it's the other way around than you might think. Sockets you learn once and can probably even apply in 40 years. The current hip web framework you can't even use anymore in the same company in 2-5 years.


Why not go all the way down to device physics? It does no harm.


I'm late to this party, but I agree. We hired someone who was explicitly a "Kubernetes" expert, because we were investigating Kubernetes.

What we discovered is that this meant they understood kubectl pretty well, and could write the yaml files. He didn't understand anything under the hood, so we had to re-discover everything.

And, ultimately, if you want to get a feel for how complex k8s really is, try and use their documentation beyond the very basics. It's contradictory, incomplete, and sometimes behind by a version or two. It was fun building a set of automated interactions with kubectl and its cousins from the web documentation, only to find it doesn't work because the actual binaries expect different arguments.

Or the API libraries if you're feeling particularly brave: a set of these horribly complex, dynamically generated files. Oh, some of them will be alpha, others beta, others beta2, etc. The version depends on the function you're using. Have fun!


Yes, there should be a baseline of understanding. It's unpopular to say, but this field has too much complaining about things not being "easy enough", which comes from a lack of understanding of concepts and architecture and too much focus on products, like you describe.

Would you complain that you can't be a doctor without significant training or a lawyer without understanding the law? At some point, knowledge is required. Distributed applications require certain primitives and those can either be provided by Kubernetes or built from scratch, but they need to be understood to be used. There's no shortcut to that.


Not sure exactly what you're trying to say, but it seems like your second paragraph skillfully refutes the points you seem to be making in your first and third.


You need a real understanding of the problem and possible solutions to actually solve something. Tools (regardless of complexity) only help you solve it once you understand.

People tend to say Kubernetes is complicated because they don't understand the problems it's solving in the first place. If they did, it would be rather straightforward or they would recognize they don't need it in the first place.


Actually, my point was not that Kubernetes (and other "technologies" like it) is complicated, but that people tend to make it a be all and end all without actually understanding how it works under the hood. You're right about one thing: I've seen plenty of people proposing Kubernetes (or Docker Swarm or whatever) for simple systems that clearly don't need it.


I've seen plenty of people proposing Kubernetes (or Docker Swarm or whatever) for simple systems that clearly don't need it.

That may be deliberate resume-padding. Same as most Hadoop implementations.


We're saying the same thing. It's not magical/complicated/etc. It's a set of tools, that work great if you understand the underlying concepts and when to apply them.


> for simple systems that clearly don't need it.

Unless you have a single server with no availability requirements, your system is probably not "simple".


We do have a hundred or so servers with multiple redundancies, but we've done okay without Kubernetes...


* what do you use to scale your databases

* what do you use for master election

* what do you use to deploy your stuff

* what do you use on the tcp level for HA (keepalived?)

* what do you use to keep your system up to date

These would be my first questions about your system, and if you used k8s, all of them could be answered with "Kubernetes".


Could you link to the k8s docs that does each of those?

Not trying to be snide, but so far I haven't found k8s to be much more than primitives with which to build those things.

I've also only begun working with it recently, so I do not have much experience, and any guidance would be much appreciated.


Well, you actually should be a little bit more familiar with all the concepts of the system to do this stuff.

what do you use to scale your databases:

Databases can be scaled with StatefulSets; there is a good library that helps with that: github.com/zalando/spilo. It is actually way easier than scaling over VMs (especially on bare metal). Since 1.10 you can even use local storage as block storage, so no need for a storage system. So basically k8s is just a good layer to scale such things; before, we used Patroni without k8s, and it is way harder to actually do service discovery that way (more on that later).

what do you use for master election:

Master election can be done with ConfigMaps or Endpoints (actually github.com/zalando/spilo uses github.com/zalando/patroni for running PostgreSQL and does master election over Endpoints or ConfigMap annotations). And etcd also exists when you use k8s (it's actually the hardest thing to install when you deploy k8s).

what do you use to deploy your stuff:

Well, Deployments (in the apps API group) are extremely easy to use and do zero-downtime deployments when you just change your image version.
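
(For illustration, a rolling update is roughly this; a sketch, the deployment and image names are placeholders:)

  # bump the image; the Deployment controller rolls pods over in batches
  kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
  # watch it converge, and roll back if something looks wrong
  kubectl rollout status deployment/myapp
  kubectl rollout undo deployment/myapp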

what do you use on the tcp level for HA (keepalived?):

Some folks at Google created MetalLB: https://metallb.universe.tf/ It's actually a load balancer on top of k8s that is so simple to use (just follow https://metallb.universe.tf/tutorial/layer2/ and it will work).
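
(Roughly what the layer2 tutorial config amounted to at the time; a sketch, the address range is a placeholder for free addresses on your LAN:)

  # save as metallb-config.yaml, then: kubectl apply -f metallb-config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 192.168.1.240-192.168.1.250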

what do you use to keep your system up to date:

Well, that's more tricky. However, Container Linux (formerly CoreOS; not related to k8s) works very well for keeping the OS up to date. You can even use https://github.com/coreos/container-linux-update-operator to only update one node at a time in a k8s env. It also works without k8s, but you still need to run mostly containerized software on Container Linux.

If you are new to k8s and want to know more about "easy to use" bare-metal stuff, you should look into Container Linux Ignition and kubeadm. It's really simple to get up and running from scratch; it's mostly just creating a nice cloud-config and running kubeadm once on the nodes you want to install.


Thank you!


K8s is a great system manager. A natural evolution of config management and containerization.

And it's good for deploying big systems with lots of moving parts (such as OpenStack, or any proprietary system, let's say a big e-commerce site).

And even if the problem is straightforward (just run these Docker containers, just do this rolling upgrade, just set up these nginx load balancers, just add these to monitoring), all of these are a chore, especially if done right. And config managers always fail at the premise of applying changes to a "known state" (or they spend an eternity checking the state), so why not fuse actual monitoring with config management? Then a lot of nice concepts fall out automatically, like Deployments. No longer do you have to disable the alerting part of monitoring, or mark certain parts of the telemetry time series as irrelevant for SLA accounting because we were deploying (or cobble together scripts to handle this for you), and then evolve the whole system as the underlying components (the things that are being deployed) change.

Sure, k8s is not there yet, as the problem is big, complex, and hard to get right even in parts, but a lot better already than the simple one-man-army homegrown solutions. (IMHO)


You referenced my tweet of the latest Cloud Native Landscape, which is the backside of our handouts.

Please don't miss the Cloud Native Trail Map, which is the front side: https://twitter.com/dankohn1/status/999727252135972864


You are so right! There is a shift going on between "operators" and "engineers"; many "software engineering" jobs mainly consist of configuring and operating on top of somebody else's product.

They do write code, but the technical job, per se, is not that skilled.


I expect people said this about developers that just knew about TCP/IP and not how the PHY worked.

At every level of abstraction there is a boundary that people may not choose (or have time) to pass.

I'm not saying everyone should be forced to learn about every layer, but these knowledge boundaries exist everywhere, and I'm not sure what can be done to improve the situation...


There's no problem with not knowing the deeper/higher layers. The problem is when someone outright ignores their ignorance, when someone doesn't even acknowledge their boundaries. And when those boundaries are set in stone. ("I won't touch the terminal." or "I won't touch C." or "I won't touch CSS.")


As someone who has stared at a number of those projects, at this point I'm pretty excited that there are a number of different alternatives which tweak different knobs in the solution space of the problem. I think in the next few years you'll see a winnowing of choices in each category, and probably bundling of the top-tier choices into an effective PaaS solution.

Kube is very clearly in the expansion phase of its lifecycle but I doubt it will stay this way over the long run.

Additionally, I think the fact that the overall categories are pretty well chosen is a sign of how decently understood the macro problems are, and that there's good reason to hope that you'll end up in a place with 1-2 dominant players in each.


Before RDBMSes became a commodity, companies would build them in-house, which required very specialized skills.

Then it became a commodity and now you can just use postgresql for a basic web app without knowing much about dbs.

Start adding scale and availability requirements in the mix, and you're back in need of specialized knowledge on indices, query planning, disk access patterns and db admin in general.

Same for cloud providers imho: you can throw an app online quicker than ever before, but as things get more complex you will still need an understanding of infrastructure, network, availability, security etc.


Did you catch some of the comments yesterday (in the thread about Intel and age discrimination) poo-pooing the relevance of experience in software engineering? There is your answer, right there.


"Full-stackoverflow" engineers


I don't know, man, people work to make money and feed their family (mostly). 'Expert in sockets' is not something on a resume that would get them a job.


That you both have your domains of knowledge? I mean should the electrical engineer worry that you know what a socket is but not how it actually travels over the wire?


Actually, the correct analogy here is an electrical engineer who knows the General Electric Foobar technology and can use its console to design and build an electrical system, while another engineer knows the Alcatel FizzBuzz technology and believes it's superior because it can build larger systems. Neither truly understands how resistance or current actually works, because they're all abstracted away under system-specific terminologies.


Manchester encoding? (assuming twisted pair ethernet, at least last time I looked into this stuff). And no, while I don't expect a developer to be able to build a NIC from first principles, I do expect someone writing a distributed application to at least know the difference between TCP and UDP. (cause that's actually HIGHLY relevant).


Glancing at it distractedly, it seems like one of those "hot new JavaScript framework of the month" infographics.


I don't know what to think when I meet engineers who know how to setup an ELB on AWS but don't quite understand what a socket is...

I keep meeting people who list say Postgres as a skill and when probed they admit they just clicked a button in Amazon RDS and that’s all they know about it. At some point it crosses the line into outright deception. This is the other side of the coin on why interviewing is so broken too.


I'd say this goes both ways. You have companies asking for the impossible that won't interview you otherwise; this creates an incentive to deceive. I'd say HR is at fault here for asking for 10 years of experience in everything, instead of just looking for software engineers who have good fundamentals and can learn.

I once applied for a job writing tools for embedded programmers that said they want experience in Angular, React, SPAs whatever. I'm working in embedded systems, so obviously 99.9% of people haven't even touched web apps, never mind the modern technologies. I actually got to the interview (despite no deception), because I knew someone working there. They said they actually don't have a single web app and are doing everything in C# (on desktop), and were thinking about making some web apps.

They still don't have any web apps.


I am pretty OK with a little subjective exaggeration, like if someone is at least competent/proficient and describes themselves as “expert”... well, who’s to say? But spending literally 2 minutes clicking a GUI then adding another skill to your CV is breathtaking audacity.


Microsoft Word is also incredibly complex software with decades of development and features, and yet it's just a word processor. Everyone uses a small subset of the actual functionality which is why the entire system can be complex and simple at the same time, depending on your needs.

It's exactly the same with Kubernetes. It's just clustering software that ties multiple servers together to give you a PaaS-like workspace to run containers, but there are 1000s of details you can use if you need them to build much more intelligence into operations. If you don't need them, you can just run a single container and still benefit from simple declarative deployments and automated oversight.


But it’s trivial to start typing in Word. Spell checking is easy, as are basic formatting operations. Loading/saving work the way you’d expect.

Yes, you can write a dissertation with a ton of support from Word to make your life easier, but doing simple things is simple.

It sounds like that’s what missing from kubectl. Even for a small start it takes a lot of knowledge.

To continue the Word analogy, that sounds like LaTeX. It’s very powerful, but no normal person is going to get started for the first time very fast for basic tasks. Certainly not compared to Word.

The Rails analogy from the article seems very apt.


Doing simple things in Kubernetes is also simple. It's 1 line to get a pod up and running:

  kubectl run appname --image=yourimage
You can then graduate to a basic YAML config file with a few lines and update with:

  kubectl apply -f yourfile.yaml
At some point, if you want to build a distributed application, you need to know the concepts involved. Looking at it relatively, you needed to learn and deal with many more low-level details before Kubernetes existed, so it's actually quite an improvement to what we had just a few years ago.
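
(For illustration, the yourfile.yaml above can be about this small; a sketch, names and image are placeholders:)

  # a minimal Deployment; apply with: kubectl apply -f yourfile.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: appname
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: appname
    template:
      metadata:
        labels:
          app: appname
      spec:
        containers:
        - name: appname
          image: yourimage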


You skip over setting up Kubernetes. And keeping it running.


A distributed system will never be as intuitive as MS Word, which is basically the digitized version of stone tablets, or papyrus, or whatever thousands-of-years-old thing we are already familiar with.

Or, to stay a bit more strict, just look at typewriters. Word is a typewriter. People look at typewriters and sort of understand them after pushing one or two buttons. Yes, there are hidden gems and details and a long road from mechanical typewriters to Word and laser printers (e.g. https://www.youtube.com/watch?v=bRCNenhcvpw ), but people have much better intuitions about these pieces of hardware and software, because they are simply updates on existing tech.

Distributed systems are a new, and hitherto explicitly unknown, concept for many people.

And with minikube you can set up k8s as easily as MS Office. (Or you can use Typhoon https://typhoon.psdn.io .. )

And it's trivial to package up all this into a website, like Office 365 and GKE https://cloud.google.com/kubernetes-engine/ .

Or you can run it on your own PC with minikube and VirtualBox/KVM/qemu, and similarly you can install Office locally. But it doesn't make much sense for one, and it increasingly makes less sense for the other too.


If we're talking about the actual installation of a distributed system, then installing a distributed database isn't "easy" either, and also requires knowing the concepts so you know what you're doing.

Kubernetes is not going to be simpler or easier than the software that it's designed to run on top of itself, but it's not that hard anymore either. The installers work well, there are several distros with varying capabilities, or you can just use the public clouds like most people do, where it's one button.


I think this is a totally flawed way of thinking. Most systems do not need a fully distributed database, for example, and as a result most don't need the flexibility.

E.g. over years of using etcd, I've come to the conclusion that none of the uses I've actually used it for were necessary, and so I've generally stopped using it. I'll use it again if I come across instances where consistency is sufficiently critical and the system needs to be distributed, but that's a rare situation to be in. E.g. I've seen lots of systems try to use it to distribute configuration data. Most of the time this doesn't need to be consistent, as long as you can determine whether or not it is (and so take outdated instances out of rotation, or not put them back in rotation). Same for load balancer configs, for example.

It's gotten to the point where whenever someone mentions "distributed database", alarm bells go off in my head. 9 out of 10 times it's a sign of someone over-engineering a system and building in unnecessary complexity.

What I want out of a system to manage my containers is something that reduces complexity of deployment and operation, not something that introduces more complexity. So far I've seen nothing of Kubernetes that indicates it fits that bill. Maybe it does on the very high end. My largest systems have "only" been in the few hundred containers/VM range over dozens of servers in 4-5 physical locations (on premises/co-location, managed and public cloud). For that scale, I've personally found Kubernetes overkill and adding too much complexity.

Maybe that'll change some day as it matures, but I'm not in a rush to complicate my stacks.


This is far off the topic and I'm not sure what point you're making, or what you refer to as a way of thinking.

My main point in all of this is that concepts must be understood to run large distributed applications. If you don't have such an app, nothing applies. If you do, then you have a choice of tools to use, and one of them is Kubernetes which offers a great foundation and set of features. There are other tools like Nomad, Chef or your own code and scripts - it doesn't matter what you use as long as you understand what you're doing.


So what do you use instead of Kubernetes? Docker Swarm? Manually allocating containers on specific machines? Something else?


Here is a thought experiment getting to the other person’s post:

You have 1000 machines. Each runs 1000 containers with an average runtime of 1 hour per container.

Design a scheduler that runs on a single machine with a cold spare. For failover, assume you have access to a reliable RDBMS or reliable NFS server.


Compare to setting up a Lambda application with a DynamoDB or Aurora database. Very simple, has limitations but you get a scalable distributed system for (almost) no time investment.


You can buy hosted k8s, one click, and your cluster is ready.

https://cloud.google.com/kubernetes-engine/


But that’s expensive.

Plugging in numbers for my $1000 synology, which runs a half dozen dockers, a vm, and is off-site backed up (for $120/yr), it tells me that I’ll pay $135/month to run comparable kubernetes at google.

Even if you pretend my $75/month broadband connection is only used by the synology, and include power costs, the synology still wins by $10’s/month.

(I included 3TB of storage for kubernetes, and 6TB usable for synology, with 3TB used, and sized for the high-memory 4 core machine, since dram is the synology bottleneck, even with a low wattage cpu. The synology is intentionally over-sized and is basically idle, so “but scalability!” isn’t a valid complaint.)

Also, the 9’s I’ve observed were trouncing amazon for a while, though they may be catching up, thanks to comcast...


Yeah, it is. But that wasn't the question. MS Office SaaS or Adobe Creative Cloud is expensive too. AWS/GCloud is already quite pricey as it is, and every cloudified thing they roll out is just a money pump for people who don't know better.

But if you want to roll your own, then you can, it's FOSS after all. kubeadm works very well.

Comparing Synology with anything Google is silly, but useful. You can rent dedicated machines easily. Leaseweb is nice. And then you can build your infrastructure for cheaper.


What is your point? You just randomly compared several different technologies.

Yes, different things are different levels of complexity, and using a managed service means paying someone else to handle it.


well, this is mainly because the hard part is abstracted away from you by amazon.


K8s changed the way I thought software should be built, because it presents the data centre as an API.

K8s is also not that complicated after you grok it. It's also pretty easy to get started for simple cases. It's just that you need to switch your mindset a bit; but the benefits from the mental switch are non-negligible.


> K8s is also not that complicated after you grok it.

Writing (or code-generating) a bunch of YAML configs is the easiest part about K8s. Diagnosing why in the world it misbehaves in some weird manner is where the real complexity is.

Of course, this does not apply when you're not a system administrator, but just a user of someone else's (Google's) K8s.


https://cloud.google.com/kubernetes-engine/ .. easy as Office365.

With GitLab you can deploy to k8s directly. Like Word and let's say SharePoint.

Also, there are a lot of folks doing this commoditization and packaging up k8s in various ways to make it more usable for a wider audience.

You're comparing k8s to Word instead of Windows kernel. Or maybe instead of .NET.


It's true, and I think that there's a base case for orchestration, which is to autoscale nodes based on CPU/memory consumption behind a load balancer; that would get 90% of projects off the ground without needing to think about YAML files.


YES. THIS!!

I'm a single-person ops team for my startup. Granted, I MAY be an exceptional learner and not realize it, but I mostly consider myself an under-achieving stoner.

I chose Kubernetes as our platform a few years ago and it's been absolutely wonderful and only getting better. Every once in a while I do a thought experiment with my co-workers to ensure Kubernetes is still the product for us. It is. We all love it and it makes my job easier. And it's really not that complicated.

Let's go ahead and list a few things Kubernetes Provides at the cost of writing a few YAML files and provisioning some docker containers in our CI/CD stack:

- Service discovery and health checks

- Zero downtime upgrades with rollouts and rollbacks

- Horizontal auto-scaling ( VM AND container )

- Configuration management and secret storage

- Immutable infrastructure

- Automatic SSL provisioning and routing via kube-lego and GCE ingress controller

- Log aggregation and monitoring via Heapster

- Cloud-provider agnostic configurations

- A consistent and approachable API to implement required features onto.

- ( With Google GKE ) Automatic security updates with a set maintenance window.

- Easy deployments with Kubernetes Keel

There are more benefits but these are the ones that come off of the top of my head.

ALSO, it's an opinionated framework that makes doing things "The Right Way" easy and intuitive to do.

In my TINY organization's reckoning, K8 actually makes our lives EASIER rather than more complicated and is considered one of the best decisions we ever made. It gets out of our way and lets us work on the fun stuff.

> In the case of applications that simply don’t have the scale problem, they usually don’t need the added complexity of a self-healing, horizontally scalable system.

If you build on a swamp your castle is going to sink into the swamp. Don't obsess over scale, but plan to scale, even if it's just "We can throw more servers at it."

Self-healing? The author is correct: Probably not necessary.

But scaling goes both ways: Our start-up is counting pennies and saving money wherever we can. Having an auto-scaling pod and node cluster makes so much sense. Our cluster automatically scales up and down based on load, saving us thousands of dollars a month.
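
(Pod-level autoscaling, for instance, is a one-liner; a sketch, the name and thresholds are placeholders, and node autoscaling is configured separately on the cloud side:)

  # keep between 2 and 10 replicas, targeting 70% CPU utilization
  kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10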

> The problem I see with Kubernetes is that the cognitive load in the early parts of a project are simply too high. The number of things you need to create and worry about are a barrier to starting a project and iterating on it quickly. In the early days of a project, feature velocity and iteration speed are the most important factors. I view the Heroku model as the ideal development model. You have a managed hosted Postgres database and you just git push to get new code deployed and out the door. There’s very little to think about. It may not scale to infinity and it may get expensive, but you can worry about those things once you’ve actually got a hit on your hands.

I disagree with the assertion that starting out is difficult ( Or, that it is more difficult than other solutions ), and heroku is EXPENSIVE. With GKE, you are paying for the VMs and little more and get a BUCKET-LOAD more features.

What is this nebulous "complicated part" of setting up kubernetes? You literally hit a button and it creates your cluster.

You can have an application ready to go with three objects: a service, an ingress, and a deployment. I find Heroku to be of a similar complexity with less flexibility.

I get EXCELLENT monitoring and log aggregation by running `helm install datadog` and providing my key. I get a good-enough rabbitmq cluster by calling `helm install rabbitmq`. I get automatically provisioned SSL by calling `helm install kube-lego.`

So I suppose my response would be: No, I don't think it will. I think the complexity is overstated.


> I think the complexity is overstated.

This is because you don't manage K8s and don't deal with this complexity. Google does.

Now, imagine a case where suddenly one pod can't ping another, but packets flow okay in reverse direction. The cost suddenly goes up from "writing a few YAML files" to "debugging CNI".
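
(A sketch of where that kind of debugging starts; pod names, IPs and interfaces are placeholders:)

  # find the pod IPs and which nodes they landed on
  kubectl get pods -o wide
  # test connectivity from inside one pod (assuming the image ships ping)
  kubectl exec -it pod-a -- ping 10.244.2.17
  # then drop to the nodes themselves: routes, iptables rules, CNI interfaces
  ip route
  iptables-save | grep -i kube
  tcpdump -i any icmp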

> What is this nebulous "complicated part" of setting up kubernetes? You literally hit a button and it creates your cluster.

That is, if you want to depend on other companies to provide your infrastructure. Which is a perfectly reasonable business decision, but the opposite (self-managed bare-metal colocated servers) is also perfectly reasonable.


Obviously if you are spending thousands of dollars per month in infra then you are the target of K8s. The point of TFA is that most products fail before having to scale to that point.


It's not just scale, it's all the other helpful features which you get even with a tiny cluster.


When things go wrong in Word, there's no need (or potential) to dig into the underlying code for analysis.


...Ok? I don't see the point though.


I've gone to the last few KubeCons and given talks at two of them and I'd also consider myself to be more of an app developer than ops. The tone has been very much that Kubernetes is deeper in the stack than most developers want or need to be thinking about. Mantras like "kubectl is the new ssh" have become super popular. So Kubernetes ends up being the platform you build your tools developers deploy their applications with -- if you work on ops. The problem seems to be that there's not a lot of agreement on what those tools actually look like. What Kubernetes does end up doing is providing a consistent API to deploy workloads across (many, but not all) cloud providers. Over time we'll see better and better developer facing solutions built on top of Kubernetes, rather than part of Kubernetes.


The problem with saying "kubectl is the new ssh" is that it is simply not true in my opinion. Something more akin to "kubectl is to controlling a cluster as ssh is to controlling a server" would be more accurate I think. The point the OP makes about "Most Developers Don’t Have Google-Scale Problems" is true, I don't think you should use Kubernetes if your app consists of just a website and a database. But do people working on such (relatively simple) apps really consider using Kubernetes?


IMO, only if you're working on like 100+ of them. If you just maintain a website using a traditional LAMP/LEMP stack, Rails, Node.js, or something like that it still makes more sense (unless you want to be 'trendy') to stick to primitives or use more managed hosting.

But if you're maintaining a fleet of independent sites, Kubernetes' scheduling can make sense, despite the inherent complexity (TBH, you're going to have a similar level of complexity managing the same kind of scale with any other tool).


K8s also makes sense if you have more than a single server. It's not as easy to keep all your servers up to date without some automation.


You don't need K8s to automate maintenance of a small number of servers.


Yes, as part of a terrible feedback loop and resume driven development.


If you're an app developer you shouldn't need kubectl. AppOps should be as easy as Heroku but based on Kubernetes, so you have a choice of providers and a graduation path if you need more customization. With GitLab Auto DevOps we tried to provide exactly this. It will go GA June 22. It does more than just building, deployment and auto-scaling: it runs your unit tests, advises whether your quality improved or not, and runs four security tests. All that with a git push.


Running OpenShift has felt like early Rails to me. You can get started with a hosted version, switch to on AWS easily and dive deep down into Kubernetes whenever you feel ready. It is also opinionated so it’s easy to get started on the golden path and modify it to suit your needs. The only real frustration has been upgrading clusters which has gotten easier each new release.


I’m new to Openshift (about a month in) after a year of low level (hand rolled HA cluster) kubernetes experience. I don’t rate the experience in Openshift. It seems like they are trying to tack things on which are superfluous to most teams requirements, loosely defined, and not well advertised.

I’m constantly trying to figure out what it’s hiding from the Kubernetes layer, or what is being manipulated underneath to provide its behaviour.

I personally wouldn’t recommend Openshift->Kubernetes but the other way round would be a better approach once you know you need the additional functionality.

(Edit: fix typo)


Kubernetes isn't supposed to be simple; it's supposed to be a box of tools that you pull from to represent literally any workload.

Once you know what tools to ignore, and build scripts around the ones you need, it's very powerful.

This line of thinking is like faulting the golang stdlib for having a lot of useful stuff in it.


Openstack is the same way. However, people want something simple by default. Do not underestimate how poisonous "too complex" can be as a label for a project.


While kubernetes has a lot of concepts, its deployment is a lot more straightforward than OpenStack: https://dague.net/2014/08/26/openstack-as-layers/


Everything should be as simple as possible. It's the mark of good design.


As simple as possible, but no simpler.


Minikube—and by extension, basing all the starting tutorials off minikube—is approaching this ideal, IMHO. A year ago, the first time I tried it, it was frustrating to even get up and running. This year I was actually able to get some examples running locally... and that's progress :)


As someone who uses minikube every day to work on an aggregated apiserver, my impression is that minikube is incredibly fragile. I have to reset the VM more than a few times a week. Which isn't that bad considering getting back up and running is pushing one big yaml file down kubectl, but still. It could be much better.

Same with kubeadm. It's pretty okay for a test cluster, but it can't even do a HA setup out of the box. That's an absolute must-have if you have a project big and serious enough to warrant using kubernetes.


There's no single design that suits everyone.

It's a toolbox. The job of choosing the right tool is on you.


There’s a big difference between a toolbox and a box with those same tools jumbled up inside.

One is organized and sensible. The other is a mess.

Just because they are both technically capable of the same things doesn’t mean we shouldn’t hold ourselves to the higher standard.

(Haven’t used Kubernetes, just going with the analogy)


Kubernetes has some bad organization and some terrible naming due to its rapid pace of development, but it's getting better and is much nicer than any other existing system that can provide the same functionality.

I think you'd be surprised how quickly you can understand > 80% of it with just an hour spent reading the docs.


Don't pile in on k8s criticism without familiarizing yourself, please.

IMO it's a bit like Java for the cloud, the write once run anywhere bit (same caveats, but the win is still there). It intermediates the differences between cloud platforms. That alone makes it strategically worthwhile even if it was overcomplicated. But I don't think it is. Its core control loop is simple in principle, and the concepts and extension points fall out pretty naturally from it. It is a well-designed system at core and can support improvements over time without accruing cruft indefinitely, for example. Part of that makes it look like a bag of decoupled tools. But that's a strength of the system in the longer term.


I explicitly said I hadn’t used k8.

I was simply commenting on the “it’s a complex thing so it has to be complex” line of reasoning. It’s faulty; you can make complex things easier with good design.


Complexity doesn't go away, but it shifts and becomes a lot more manageable with the right tools. Clustering is inherently complex, k8s allows you to centralise, structure and manage it in a way that brings it to the masses, where before it was limited to large companies like Google, Amazon, Facebook, ... spending considerable amounts of engineering resources in large scale deployments.


> Kubernetes isn't supposed to be simple

Disagree, it's a tool designed to abstract away the complexity of deployment and operations; simplicity of use/management sounds like it should be a design goal.

It will probably get there but for now it's hard to disagree with the "complex" label.


By analogy (likely a bad one) Kubernetes is the C++ of automated infrastructure?


Exactly what I'm always saying. It's also nearly impossible in an Enterprise IT environment to get Kubernetes working on your laptop. Minikube and Docker Edge both seem to fail way too often.

As a developer one wants to spin up a system to work on, then work on it, then push results to some repo. And this loop simply isn't possible (yet?).

Also, what the author didn't mention is that even the vanilla k8s stuff is already super complex. Let's assume you manage to set up a cluster somewhere and it continues to work for more than 2 days (rarely seen in real-world work environments). Then you are faced with deploying your hello world app to work on. Just for a simple single-server nginx deployment with no files and no config, you already need to understand multiple objects: deployments, replicasets, pods, containers, nodes, hosts, services, nodePorts, port-forwards, maybe even ingresses and controllers.
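
(For reference, the minimal path through that object zoo is short in command terms; a sketch, and the hard part is still knowing what each object is for:)

  # creates a Deployment, which owns a ReplicaSet, which owns the Pods
  kubectl run nginx --image=nginx --port=80
  # a Service of type NodePort so it is reachable from outside the cluster
  kubectl expose deployment nginx --type=NodePort --port=80
  # look up the allocated nodePort, then hit any node on that port
  kubectl get svc nginx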

That means that until you feel comfortable, even in a perfect environment, you need several days or weeks. And the documentation is not really helping you there. Yes, it's better than most enterprise-grade documentation out there, but it still assumes a lot of stuff upfront. For instance, why should a developer even know what an ingress is and that it might be something he needs?

Combine that huge learning overhead with nearly impossible network debugging, beta level stability, and the low possibility to "just use it", then you have a system that most developers will never touch.

Docker itself might survive though, and one of the good things of this CNCF world is that other alternatives to docker also get a chance to improve on the existing system.


> It's also nearly impossible in an Enterprise IT environment to get Kubernetes working on your laptop.

huh? What does that mean? Maybe that means the environment is broken, not the software you want to use/work on.

> For instance, why should a developer even know what an ingress is and that it might be something he needs?

Then probably that dev shouldn't work with k8s. At all.

If the dev wants/needs a hello world on a domain/IP, then they need a web hosting service provider. (ghost.org) Or a PaaS (Heroku), or they can spin up a VM on DigitalOcean and follow any of the thousand Ubuntu Nginx Website Hosting tutorials on HowToForge.

If they already have dozens of VMs, scores of containers, and they are fighting with monitoring and config management, then they might need k8s.

And a lot of folks do need this level of infrastructure and infrastructure management (automation, abstraction, standardization, etc).


> > It's also nearly impossible in an Enterprise IT environment to get Kubernetes working on your laptop.

> huh? What does that mean? Maybe that means the environment is broken, not the software you want to use/work on.

That is what the term "Enterprise IT" means. If you ever work in a company that makes more than a million USD per year, you will find an imperfect network in a nearly unknown state, with proxies and firewalls making your life hard, and automatic reconfiguration tools/scripts/antivirus software resetting everything to "not working" the moment you take your eyes off the config files.

People and Software who really want to make money in Enterprise need to be able to handle that somehow. If you develop software on your macbook in an environment with the complexity of a Starbucks Wifi nobody can actually use your software in Enterprise.

Btw. did I mention that Windows+Outlook+Lync is the high standard of Enterprise laptops? Forget Enterprise users if you are not developing ON Windows.


... but ... but ... no one really cares about that. k8s is targeted at startups who will do the sales dance with the big Enterprises and they'll do a SaaS that's backed by k8s managed infra. (Or the Enterprise will use k8s on their Linux servers. Maybe hosted on VMware, maybe on HyperV, maybe in Azure maybe at AWS.)

k8s is a project, a lot of people find it useful. A lot of big corps have IT R&D groups (basically all Fortune 500 have), and they have their own test network. Or they test on their own rack, or on their own cloud, or on their own AWS account.

I think I don't really understand what your belief with regards to k8s is, but I'm interested, so could you give some details?


+1 on challenges in enterprise environments for local container-based development, this seems an unsolved problem for now

iterating on docker-compose might be easier/faster locally, then pushing to a dev k8s cluster for testing in ci/cd

curious if people are using k8s locally and why


It's been extremely easy with Docker for Desktop. Just one click away from "Enable Kubernetes"... in Docker preferences.


That worked for me for 2 days. Then a manager was breathing down my neck to debug his cluster. So due to stress I overwrote the kubeconfig file with my manager's kubeconfig.

Now I can't find a way to get the docker kubeconfig back, resetting the cluster, reinstalling kubernetes etc does nothing, and now that I have uninstalled and reinstalled docker, kubernetes doesn't even come up anymore. I'm also not allowed to share debug information with the Docker company. No real help from Docker developers without debug info. Because they can't tell me how docker generates a kubeconfig or where it writes its logs without seeing my diagnostics output...

It's not the vanilla perfect-world use case that makes your software; it's how well your software manages edge cases.


It's probably a bad idea to run a whole cluster on your laptop anyways. I never understood that practice. Just set up a server/mini-cluster and use that for development to host your ancillary services like RabbitMQ or whatever else you need. Then run the app you're actually developing, and only that, on your laptop.


The complexity of Kubernetes largely reflects the complexity of the problem. Nobody has delivered anything significantly simpler that hasn’t had a much smaller scope, and those tools approach the same level of complexity when composed with others to get the same level of functionality. But setting up Kubernetes is both well documented and automated on multiple cloud providers, whereas something like Nomad & Consul doesn’t really have a good end to end walk through to get you to the same destination as Kubernetes. I suppose if you’re fine with pushing that complexity into service clients you can avoid the need for a lot beyond what Nomad and Consul give you by themselves — but then you end up with the downsides that something like the Netflix microservices stack gets you. Fat clients ultimately leave the developer with more complexity, and Kubernetes helps you eliminate that in favor of more SRE/Ops/whatever-you’d-like-to-call-it complexity. Since cloud platforms can take a lot of the edge off of Ops complexity, that’s my preferred approach.


> The complexity of Kubernetes largely reflects the complexity of the problem.

I can solve most of these problems with simple Linux + systemd in a much more readable way and am finished much quicker, even after using k8s full time for 2 years.

So, no. What it represents is the complexity of the community having multiple sources of requirements, and the complexity of the landscape with everybody trying to make a claim with their name+logo without investing too much into a full stack answer.


How is systemd going to help you with service discovery, load balancing, autoscaling, automated rolling deployments, etc?

That being said, if you don’t need any of those things you probably don’t need Kubernetes either.


> service discovery

only used internally and not in a scaling fashion

> load balancing

External component that is hooked into k8s in some way

> autoscaling

External component that is hooked into k8s in some way

> automated rolling deployments

I doubt that many people are at the point yet where they can do that reliably. That feature bakes in basic assumptions, such as the k8s cluster itself surviving until the next update ships.


I would also argue that Kubernetes is less complex than it seems at first glance.

Yes, if you look at all the possible parts, and at the current monolithic codebase, there's a lot of complexity. It also supports umpteen cloud providers, volume providers, networking stacks, etc., and comes with a whole swathe of bootstrapping tools for various environments (e.g. AWS).

But if you strip it down, Kubernetes is "simple": There's a consistent object store made out of JSON structures, and then there's a bunch of controllers listening to changes to that store to make stuff real. That is the core. Everything is, in principle, controllers mediating between the data model and the real world. Very elegant and orthogonal.
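
To make that concrete, here's a minimal example of the model (name and image are placeholders): you hand the API server nothing but desired state, and the controllers keep converging reality toward it.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web                    # placeholder name
  spec:
    replicas: 3                  # desired state; controllers converge toward it
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: example/web:1.0   # placeholder image

Kill a pod and the ReplicaSet controller notices the divergence and starts a new one; that reconcile loop is the whole trick.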

You also have an API, a scheduler, and a thing called Kubelet that runs on each node to manage containers and report node-specific metrics. And of course you have Docker, though with 1.10 you can more easily run dockerless via containerd, which is a great thing indeed.

The complexity comes from the operational part, when the pieces come together. And as you say, there's not really any way around it.


It's certainly a lot easier to do "kubernetes the hard way" than it is to do "Linux from scratch", and I don't see a lot of FUD about how we should stop using Linux because it's too complex. That might seem like an extreme comparison, but I think the number & quality of comparable alternatives is similar between the two.


>Everything is, in principle, controllers mediating between the data model and the real world. Very elegant and orthogonal.

If you distill k8s down to this model alone, k8s becomes nothing but a pattern that has existed for decades. Maintaining "desired state" and "operational state" as separate things is not new.


You missed my point; I didn't say it's new, I said it was simpler than it might seem, and that thinking of it as a state machine makes it easier to understand what the core of Kubernetes really is.

And of course "nothing but a pattern" is nonsense. Pre-container systems like Puppet and Chef -- which are also, vaguely, based on converging real state towards desired state -- are firmly rooted in the traditional Unix model of mutable boxes. You can't implement a consistent reconciliation loop if your state can't be cleanly encapsulated (as with containers).


I'm a sysadmin and I attended KubeCon recently. I came back with a similar line of thought. One anecdote nailed the problem, in my opinion: "Kubernetes makes simple things hard, and hard things possible." So if you don't have things that you think are impossible, just don't pay the complexity tax.

Real-life example : https://www.reddit.com/r/devops/comments/8byasq/is_kubernete...

Paraphrasing for discussion -- Poster: A Rails project deployed to 6 servers, currently running in production.

Poster: During the asset compilation process, the servers often freeze.

Poster: I need to manually remove servers from the load balancer and deploy one by one.

Poster: I looked a lot into Kubernetes and production containerization lately, and as far as I read it, it should solve the deployment and uptime issues. I imagine it'd be a lot easier to just switch containers instead of deploying with Capistrano. I also really like the self-healing capabilities.

So, he/she hopes that Kubernetes will magically solve his/her problem (asset compilation freezing the server). I suppose, in his/her mind, Kubernetes is the snake oil that will cure it.

Things that he/she failed to put thought into (and rather got revved up about Kubernetes):

* Could I set up CI with a script that performs the asset compilation once, on one machine, and just rsyncs the final result to the prod servers? (See the sketch after this list.)

* Could I spend a couple of hours understanding the asset compilation process and find out why it freezes the server?

* Could I learn more about load balancing, rolling deploys?
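
The first option, for instance, is a few lines of shell in CI (hostnames, paths and the systemd unit name are made up):

  # build the assets once, in CI
  RAILS_ENV=production bundle exec rake assets:precompile

  # ship the result and restart one host at a time
  for host in app1 app2 app3 app4 app5 app6; do
    rsync -az public/assets/ deploy@"$host":/srv/myapp/public/assets/
    ssh deploy@"$host" 'sudo systemctl restart myapp'   # hypothetical unit name
  done

You'd still want the load balancer's health checks (or an explicit drain step) to cover the brief restart, but the production servers never compile assets themselves.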

I think this is the real problem in the tech field. People chase shiny tools and hope to throw tools at their problems, all the while ignoring the basics.

In this particular case, I think if they had a grey beard sysadmin who was grumpy to the devs, and enforced a strict release process, everyone would've been happier.


I was at KubeCon and had a similar experience: lots of engineers excited about all the technical possibilities, and much less discussion of developer productivity.

It reminds me a little of the 00's, when everyone thought their company should write its own CMS. I think we're in danger of everyone writing their own PaaS.

This is why things like Deis and Cloud Foundry exist. Most app developers should not have to understand the full depth and breadth of Kubernetes.


Amen. Use PaaS as much as possible and fallback to running VMs as a last resort.


> kubectl scaffold mysql --generate

This exists; it's called Helm, and it in fact delivers the productivity gains the author is looking for.
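
e.g., with the Helm 2 syntax current at the time of writing and a chart name from the then-default stable repo (treat it as a sketch):

  helm install stable/mysql --name my-db
  # the chart creates the deployment, service, secret and storage objects for you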


In the following paragraph, the author considers and appears to reject that solution:

>Maybe the combination of operators and Helm charts covers this, but I don’t think that will cut it. Then we’re forcing developers to learn about two other things in addition to Kubernetes. Even if it’s just increasing the vocabulary and installing a new command line tool, it’s extra effort and thought. These things need to be first-class citizens and part of the Kubernetes out-of-the-box experience. They need to be accessible via kubectl.


So, instead of keeping cross-cutting concerns separate and permitting them to evolve separately, the author wants K8S to become more complex, because they'd rather learn one more complex thing than multiple simpler things? Seems legit.

I mean, why should I install multiple libraries? Why can't glibc wash my car? Will glibc collapse under the weight of its own complexity -- even though it can't wash my car like I want it to?


I think a more charitable interpretation is that Kubernetes should be designed in such a way that these tools are a natural part of it. Think of ActiveRecord being part of Rails: sure, you can use Rails without AR, but that doesn't make sense.


Using Ruby on Rails without ActiveRecord does make sense for a number of applications. For example, an edge caching API which uses Redis on the backend. Helm is a package manager for K8S but it isn't the package manager for K8S, which is probably why it's kept separate. ActiveRecord was the ORM for Rails, before it was extracted and called "ActiveRecord" as a separate thing. Helm, however, is coming from the opposite direction -- and until it becomes something you can't use K8S without, then I'm happy to see them kept separate.


As someone new to Kubernetes, Helm is _really_ not appealing; it masks a _ton_ of complexity, and is akin to telling someone new to Docker to run all their applications off the 'Docker Library' official images on Docker Hub.

With few exceptions, those cookie cutter examples are not ideal for running production applications; they're decent starting points for learning how they could be built, just like Helm could be (if it were a little less "here's a 1,000 line Kubernetes configuration file in a box, good luck!"-ish).


I have Kubernetes in Action on my desk and I haven’t cracked it open yet because Kubernetes seems monstrously complex. Sure helm gets you up and running. When something goes wrong in prod at 3am, what do you do?


K8s is actually fairly simple and self-evident once you understand about 3 core ideas: etcd being the repository of state, in particular the spec; controllers with control loops bringing status into line with spec (the core mechanism in k8s, this is key); and a familiarity with the options on pods & deployments, for initialisation, service discovery, liveness, readiness, etc. that let the system make decisions globally while you only worry about local status (this is most of what you need to know as a dev deploying a service).

Don't buy the FUD. There's a lot of it about. K8s commoditises cloud providers. It's a strategic weapon against AWS lock-in.


This is my issue as well. I'm not comfortable running magic commands that create vast realms of infrastructure that I am responsible for but do not understand. It's one thing if it's all going to run on someone else's Kube, but even still, I need to be able to troubleshoot the apps on top, which are my responsibility.


What do you do now, without Helm or Kubernetes?



Correct, you can check out Kubeapps for a collection of apps packaged as Helm charts https://hub.kubeapps.com


"The number of things you need to create and worry about are a barrier to starting a project and iterating on it quickly."

As an SRE / Ops person, my take is that Kubernetes addresses much of the complexity your startup should be worrying about, but doesn't because "it is not a feature".

Yes, it is massively complex. That is because Ops is massively complex.


Kubernetes is the Android of the datacenter. Android OEMs are Samsung, HTC, Huawei, etc. Kubernetes OEMs are Google, Amazon, Microsoft etc. With Android, a large % of consumer market gains access to an app ecosystem by choosing Android. Similarly, businesses will _eventually_ gain access to an ecosystem of B2B applications by adopting kubernetes. Kubernetes will be the OS for business.


For a number of reasons, I think this metaphor fails.

K8s appears more open than Android, is not largely controlled by one corporation, doesn’t come crippled with vendor services, etc.


No metaphor is perfect. I highlighted where I thought it makes sense (in the ecosystem sense of apps and modules being built on top of it).


Basically agreed that simple things should be simple and complex things possible -- and in my experience that's not yet the case with standard k8s.

And having no "end users" (app developers) at a conference about tools that should serve exactly those people is an interesting observation worth investigating further.

Having to install one more tool in order to get production-ready apps installed with Helm in one command isn't asking too much, though.

Then, slightly unrelated, but it comes to mind:

I wonder if this happy-path thing works with Influx, where the author works.

Can I have a single simple command that installs everything I need to look at logs from my app and DB servers, and see the most important performance stats and HTTP/IP access logs, graphically as well as with notifications when certain easily entered thresholds are met (with reasonable defaults like 80% for CPU, IO, RAM, and disk space)?

Can I do that with only the free open source tools, as the author expects from the k8s ecosystem? Or do I only get it by buying Influx's professional offering?

So maybe it's the job of, and an opportunity for, commercial companies to develop and sell such simplifying tools. At some point, developers' time to build all these things must be paid for. If millions of developers just use the perfectly polished open source tools, and a high percentage don't even help development with bug reports, let alone patches, what are the developers going to live on while doing the polishing and simplifying?


Author here. We have more work to do to make the happy path with the set of Influx tools (the TICK stack) more turn key and easier to run. The entire feature set is available in the open source versions. The thing we keep commercial is HA and scale out clustering (either we operate for you on AWS or you buy on-premise). Our work on 2.0 of the platform should make the happy path much easier, but software is a process of continuous improvement. So I'd expect 3.0 to be even better and so on.


Thanks! Sounds great and I‘ll look into it for the next project/use case!


Is Kubernetes really scalable to a meaningful extent? I feel like if I was going to set up a Kubernetes system, I'd need to plan from the beginning to have multiple clusters anyway, and then the utility of all the scheduling features would be considerably diminished since I'd have to plan to load balance applications across the clusters in some custom way. Yuck.

The Kubernetes website (1) currently claims that it supports clusters of up to 5000 nodes, which is a decent amount but not enough to avoid having multiple clusters. Does anyone have experience operating multiple production clusters in a single territory as partitions for scaling reasons? What's the experience like?

(1) https://kubernetes.io/docs/admin/cluster-large/


5000 nodes has proven to be a high enough ceiling for basically all workloads, especially given the size of individual servers available now in clouds. Easier to run smaller numbers of bigger servers, as always.

Also, Kubernetes does support federation for cross-cluster deployments (now called multi-cluster). Some cloud services even support ingress load balancing across these, or you can do that part yourself by simply sticking with the same ports, but it all works fine today. Nothing custom needed.


If you have a need for more than 5000 nodes in a cluster, it should take a trivial amount of your resources to solve that problem IMO.

The vast majority of Kubernetes users probably won't even need a 500-node cluster, let alone a 5000-node one. It makes sense for the project to prioritize optimizing for other things -- like the developer experience, to address some of the concerns in this article -- over supporting even larger clusters right now.


The other important distinction there is the current (soft-ish) limit of 110 pods per node; if you want to run hundreds or thousands of pods on a few giant (beefy CPU/RAM) nodes, you need to rethink that strategy and go with many smaller nodes, with the pods evenly spread across them.
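
You can see the limit each node advertises, e.g.:

  kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods

The kubelet's --max-pods setting can raise the default, but as the comment says, more and smaller nodes with pods spread across them is usually the saner answer.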


Assuming C5 reserved instances (c5.18xlarge, 72 vCPUs, roughly $3/hour on demand), 5000 of those nodes is $3 * 24 * 365 * 5000 * 0.5 (reserved discount), or about $66M a year per AWS region, and 72 * 5000 = 360k cores.

That is pretty scalable.


nodes meaning compute nodes; not containers

in case there was confusion


I think people misunderstand why Kubernetes exists. It is the reverse OpenStack. Kubernetes has the potential to be the one unified API of the cloud. A middleware for proprietary cloud APIs. A few resources, like load balancers, are already at a point where you barely have to care about the underlying cloud provider. With operators and aggregated API servers (especially if they'll be offered as a service) provisioning resources could follow one well-known standard. Calling it now, within the next 2 years cloud database providers like Compose will offer a way to CRUD resources via CRD/apimachinery-compatible services. Few more years and we'll have a generic yaml spec for these resources that work out of the box on multiple providers (probably with a bunch of annotations that are vendor specific).
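
If that happens, provisioning a managed database might look something like this -- the API group, kind and fields below are entirely made up, just to illustrate the idea:

  apiVersion: databases.example.com/v1alpha1   # hypothetical API group
  kind: Database
  metadata:
    name: orders-db
  spec:
    engine: postgres
    version: "10"
    storageGB: 100
    # vendor-specific knobs would end up in annotations, as suggested above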


> available 99.5% of the time with decent alerting for operators to kick it

An operator should never have to "kick" a service. It should repair itself, except for the occasional hardware replacement if one is working with bare metal. And for anything that's being sold as a product, as opposed to an internal tool, I think 99.9% availability should be the minimum.

But I don't know enough about Kubernetes to say whether it's overkill at the scale of just a few servers.


> An operator should never have to "kick" a service.

Have you ever been a sysadmin? There are very few services that don’t need a kick every now and then.


Yes I have. And if a service ever needs a manual kick, I consider that a bug. At least when running in a public cloud.


Sure, sure, but... everything is buggy, by that metric. I think that's what the parent comment was getting at.


How many “kicks” does it take to build a reasonably resilient distributed service? A lot!


CAP theorem says it's impossible.


That isn't what the CAP theorem says at all. It says you need to pick your guarantees.

Additionally, resilience is in the eye of the beholder: it might mean AP for a service that needs to be up at the cost of consistency, or CP for a service that detects when it can't achieve quorum and fails gracefully.


I think https://draft.sh is trying to address this but it's still early in development.


Heroku / Cloud Foundry offer exactly what the author points towards: a very simple user interface for developers. InfluxDB of course needs to run stateful applications, so it's not a good use case for them.

The Cloud Foundry community has started exploring a switch from their own container management system to K8s. If that becomes real, CF would "just" become a nice user interface on top of k8s. The right move imho.


The cloudfoundry-incubator/eirini project is where the CF+K8s scheduler work is going on.

Related to this, SUSE and IBM have already released distributions of CF that run on Kubernetes.

I'm biased (I work on one of these) but I really think this is the most expedient way to enable PaaS features on K8s.


This is basically what OpenShift is. Highly recommended.


The way the author's 'scaffolding' idea should work is that you start by not using Kubernetes at all, rather than by using an easier version of it. Of course, 'not using it' already exists... but what doesn't exist is a smooth way to transition.

In particular, you can start off with Azure AppService, AWS Elastic Beanstalk, Google AppEngine. You could also go serverless. All these approaches allow rapid development and deployment with low ops overhead, and they'll actually scale and heal well. Ultimately, the services are doing the k8s type of stuff for you. To state that inversely, running kubernetes is like trying to run your own PaaS. (When put that way, it sounds dubious that so many people are trying to jump into k8s, but I'm not an expert on the $$ economics of devops.)

The next-gen evolution of the cloud platforms could really take this migration path from PaaS to IaaS to a whole new level compared to where it is right now.


Firstly, I had the same experience at KubeCon this year: most people I encountered were ops engineers or infra engineers. Maybe the application developers were there, but they were less vocal. I can also imagine the subjects being rather specific and deep for your average application developer.

Secondly, aren't solutions like Helm supposed to take away the need for scaffolding? The problem with scaffolding is staying up to date with new templates; it usually results in the deployments not being updated anymore.

Additionally, I have to say that getting started with K8s was quite easy because we already had experience with Docker. OpenShift was similar and has source-to-image builds, which are very convenient. So I don't perceive K8s as hard to start using. Using it 'correctly' and exploiting its full potential, yes, that is harder -- but that holds for any product.


Isn't one of the appeals of Kubernetes to have a portable cloud environment, which means I can easily switch between Windows Azure, Google Cloud, Amazon Web Services, on premise, and even Minikube (localhost). Is there any simpler alternative for that?


For many use cases, I'd imagine an abstraction atop of Kubernetes (like OpenShift or Rancher) is a good fit.


As an app developer, I can vouch that k8s feels frustratingly complex.

My current client work has recently shifted to using k8s. I took the time to get minikube working locally to get a better understanding. It definitely helped, but I find the layers of abstraction hard to grok after not working with it for a while.

I can see the value the tool offers, but I get the feeling it's supposed to be reserved for higher degrees of scale than the average 2-8 node app.

Black box is how I feel about it sometimes. Hopefully I'll get more one on one time with it in the future. It seems like a really cool technology.


Not having used it in any meaningful way, after a lot of reading I am sometimes still unclear with the value proposition.

The value imho could be in being able to package distributed applications and deploy across cloud providers or on prem, seamlessly.

I don't think this is true, though, short of putting a lot of effort into abstracting access to a GCP/AWS/Azure managed service (say, a DB), which is probably a bad idea.

If you take that away, then a lot of the replication, autoscaling, load balancing, failover etc. can be implemented using cloud providers without having to manage the complexity of k8s.

Hope to be proven wrong here.


This is exactly one of the reasons we picked kontena.io in our startup, and we never regretted it. Also super excited about their approach of running on top of k8s with pharos.sh.


TL;DR: Kubernetes needs to continue to focus on the developer experience, but it's good enough for InfluxData's new cloud offering.

The project has been listening: https://github.com/kubernetes/community/blob/master/sig-apps...


Forming a SIG is just paying lip service. It remains to be seen if anything will really change or if kubernetes will suffocate itself with its own bloat.


> Scaffold generators for common elements would be great. Need a MySQL database? Having a command like [...] to create a stateful set, service and everything else necessary would go a long way. Then just a single command to deploy the scaffold into your k8s environment and you’d have a production-worthy database in just a few console commands. The same could be created for the popular application frameworks, message brokers, and anything else we can think of.

Rook sort of does this. You deploy a Rook operator, then just one other kubectl command to get an object store, database, shared filesystem, etc...

https://blog.rook.io/rooks-framework-for-cloud-native-storag...


This blog post starts under a false premise. Kubernetes is not for app developers, it is the substrate on which applications, databases, and other workloads run. Just like you wouldn't want an application developer SSHing into machines in production (assuming you have ops people), you don't want them to use kubernetes, except kubernetes has done one better -- it's abstracted so well (especially with the introduction and widespread use of Custom Resource Definitions AKA CRDs) that you can let them write resource definitions, which are declarative representations of the resources they will need for their application, and run those.

Coming from someone who gave a talk at Kubecon I'm very surprised to read something like this. Maybe I'm the one with the misunderstanding, but I'm going to try and refute the things this article said/is implying.

1. Kubernetes is complex

This is kind of right, but it's also kind of not -- Kubernetes is essentially complex, given that it encourages write-once solutions to all the problems it faces. Here are the pieces that make a basic Kubernetes "cluster":

- apiserver => you send commands to this to change/query cluster state

- controller-manager => works to ensure that the cluster is in the state you want it to be in (making workloads replicate/restart/etc.)

- scheduler => figures out where to put workloads

- kubelet => runs containers -- one on each node that can do work

- kube-proxy => maintains the routing infrastructure necessary to enable containers on any node to hit a container on another one.

All of those pieces are needed -- the only concession I would make is that they could all be in the same daemon (one executable), but that's actually worse at scale, and harder to debug -- all of these services can produce a lot of logs.

2. Application developers can't use kubernetes as it is

Application developers can use kubernetes as it is. Learning to write a kubernetes resource definition is not any harder than figuring out the conventions and configuration you have to write for Heroku, or AWS ElasticBeanstalk, or AWS ECS. In fact, I would argue that it's simpler.
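
For a typical service, the day-to-day surface area is small -- something like this (the manifest file and deployment name are assumed):

  kubectl apply -f deployment.yaml          # declare or update the desired state
  kubectl rollout status deployment/web     # wait for the rolling update to finish
  kubectl logs deployment/web --tail=50     # peek at a misbehaving pod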

We've touched on another problem here -- the competitor for Kubernetes is not SSH, it's not Heroku -- it's tools like CloudFormation/ECS. I don't know if you've used CloudFormation, but it's kind of a clusterfuck, hard to set up quite right, and the dynamic YAML approach they've taken is enough rope for one clever developer to hang you and your whole team with.

Bold prediction, but I think AWS is going to abandon CloudFormation and ECS in favor of Kubernetes resources once it stabilizes.

OK, let's say you disagreed with everything I've said up until this point -- at the very least, you can deploy tools like the following to your kubernetes cluster:

https://gitkube.sh => heroku workflow

https://helm.sh => cloud-formation/elastic-beanstalk workflow (with kubernetes primitives)

And presto, you have a completely different interface to your cluster, WITHOUT changing anything fundamental underneath.

3. Developers who only focus on the application-level are the goal

Why would you even want this? Not only is it basically impossible to hide the underlying infrastructure so well that the application developer doesn't have to know about it, it's arguably not even a good idea.

Take session management -- if you want to handle it in the context of more than one frontend running at a time, you generally outsource that state to a cache like redis. An application developer who grew up in this imaginary world where app developers never touch infrastructure is not who I want solving this issue, assuming there isn't a qualified ops person. If you needed to optimize even further, app-local caches could be deployed, but this requires knowledge of "sticky sessions" -- this very much is a deployment/infrastructure specific question, again, that app-only developer is just about useless here.

I'm no hiring manager but the desire to stay an "application" developer who only worries about that part of the stack when the "application" as a whole is so much more would be a red flag for me. Even if you were delivering a desktop application, the developer who worries about underlying OS-specific enhancements (for example knowing how to optimize the app for MacOS) is the one I want, the one I want to pay the big bucks for.

4. It's hard to deploy the usual app+backing store+caching+worker pool structure

The author touches on this a little bit with the "maybe operators and helm charts solve this", and that's exactly what the operator pattern (Custom Resource Definitions, AKA CRDs, plus custom controllers) were meant to solve -- now you can actually give declarative specifications of what you want your Redis/Postgres/Celery/whatever cluster to look like, and `kubectl apply`, and the platform handles it. There's arguably no difference here between how you'd use this and a tool like heroku.
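
To illustrate (hypothetical kind and fields -- the real schema depends on whichever operator you install):

  apiVersion: redis.example.com/v1    # hypothetical CRD installed by some Redis operator
  kind: RedisCluster
  metadata:
    name: cache
  spec:
    replicas: 3
    version: "4.0"

kubectl apply that, and the operator's controller watches for the resource and does the actual provisioning.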

Also, for the record you can trivially extend `kubectl`: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plug...


> "the only concession I would make is that they could all be in the same daemon (one executable), but that's actually worse at scale, and harder to debug -- all of these services can produce a lot of logs."

This is already a thing, sort of, called hyperkube: https://github.com/kubernetes/kubernetes/blob/master/cluster...

The caveat is that each daemon has to be started separately still


Hyperkube was what I used when I got my first cluster up and running (this was when CoreOS still hosted Kubernetes bare-metal setup guides instead of just pointing everyone to Tectonic). I remember it fondly :)

I didn't include it due to that caveat, and the general feeling that the processes really are meant to be started separately. I haven't seen anyone try and run these processes with some sort of supervisor but it seems like that's not "the way" and wouldn't really even offer any benefits.


One major reason OpenStack is a messy project today is the number of companies involved in the foundation early on. It was a big problem: Red Hat vs. HP vs. Rackspace vs. XYZ.


These articles seem common, recently.

I don’t really understand what this sentiment is about.

It’s a really useful orchestrator, in some cases. In other circumstances, it’s unnecessary complexity.


The funny thing about Kubernetes is that I read somewhere that it's not actually widely used within Google (please correct me if I'm wrong). Given that it's a complicated piece of software and that Google has extremely complicated requirements, that's pretty concerning. To me, we've barely started to solve deployments, so the idea of Kubernetes seems a little ahead of its time. If you're using GCP, I do hear it's really great. But then, what's the huge benefit of Kubernetes in the first place? A tiny bit less vendor lock-in?


Google uses Borg internally, and Kubernetes is really their third container orchestration system. After Borg came Omega, which was never deployed, but ended up being a test bed for a lot of innovations that were folded back into Borg. But Borg is a decade old and has a lot of warts (according to its designers), and with Kubernetes they aimed to learn from their mistakes and improve on the design.

As far as I can tell, the aim with Kubernetes was never to replace Borg at Google -- Google is far too invested in Borg, and it would take a considerable engineering effort to migrate away from it. Rather, developers at Google saw an opportunity to create an open source version based on what they had learned and help the world along in adopting the same engineering principles as Google has long practiced. Not all altruistic notions, of course -- Google benefits from the commoditization of containers indirectly, by undermining competitors such as AWS (where containers are still not well-supported) and making their own cloud the best fit for Kubernetes.

Google does run stuff on Kubernetes, via GKE. As I understand it, new products are encouraged to run on GCP. I don't know how many applications they run, however. Maybe someone from Google can comment.


The devs totally forgot that in a normal environment you don't have the 10,000 other internal Google tools, though.


If someone thinks kubernetes is complicated, they would cry during their first month with the borg.


I would argue that borg has much less initial complexity than kubernetes. Because of Google's integrated systems you can take any local binary and run it in borg with a five-line config file and one command. You can't get started that quickly with kubernetes.

If you understand the implications of running N replicas of a command on <= N machines in the cloud, borg does exactly what you want. Most of the people I met at Google complaining about borg just didn't understand how a program gets executed at all ... borg was the least of their problems.

Also borg is easier to pronounce.


You can use "kubectl run" in k8s for a one-line run. Now, if it runs, it runs. With Borg, go figure out why your job is getting preempted.
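
e.g. (behaviour has shifted a bit across kubectl versions, so treat this as a sketch):

  kubectl run hello --image=nginx
  kubectl get pods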


You can use kubectl run to make something run, only after you've got all your initial authentication/authorization squared away, and your nodes are deployed. Assuming you work at Google you can "borgcfg up" your job and it will run under your role, make authenticated RPCs under your own authority, using your personal freebie quota which is infinite. You can get preempted; that's the price of infinite free resources. Anybody who complains about that should be fired.


Kubernetes is newer than Google's Borg infrastructure, as I understand it.

https://kubernetes.io/blog/2015/04/borg-predecessor-to-kuber...


This is the ultimate key feature of Kubernetes. It’s 1000x better than a hosted closed source solution because you can run it entirely locally.

It’s amazing that the hosted option 100% mirrors the self hosted option which mirrors dev.


>I read somewhere that it's not actually widely used within Google

Borg and Container Engine:

https://www.quora.com/Does-Google-use-the-Open-Source-Kubern...


This is my personal experience on the matter as a DevOps consultant who periodically interviews in the traditional manner (as opposed to getting gigs from people that already worked with me).

Despite having 15+ years of *nix experience, including internals, having a track record of building large scalable infra and knowing a few different programming languages, what happened to me was this: I was getting filtered out because I didn't have Docker and Kubernetes and even (at one point) Cloudformation and/or Terraform. No problem - I learned those things (minus Kubernetes, so far) quite quickly. Much more quickly than the grueling trial-by-fire years of Unix administration. I like to know how things work, not just how to use them.

So if you wonder and worry about the state of enterprise IT some days, look no further than hiring managers themselves, who will pick a 25 year-old who writes YAML for some abstraction-of-an-abstraction system that does infrastructure under the hood, infrastructure that people kind of don't really try very hard to understand. After all, it's disposable thanks to infrastructure-as-code, right?

How do I know this? Well, I've seen shop after shop that's suffered a spaghetti infrastructure, using all the latest and greatest, from AWS and Kubernetes and Docker and other abstraction layers above AWS. And what happens is that it gets so complex that no one knows what's really going on, and at the very least two common symptoms arise: people are terrified during releases and they take hours, with many people on a call together very late at night; they spend a fortune on extra instances (in the case of AWS) because they haven't properly worked out environment separations (they had trouble keeping them the same, or one of many other problems).

A talented dev manager I used to work with used to complain that they had trouble hiring people who knew Javascript well, but they had expert after expert of some fancy JS framework try to interview, unable to answer the fundamentals-types of questions. I think it's similar with enterprise infrastructure.

I don't know what to say. I hope things go full swing and people who know how things work under the hood can charge consulting dollars for fixing the fuckups. It's not enough to know YAML, you also need to have wisdom in maintaining complex infrastructure, understand the delicate balance between change and stability, and be able to troubleshoot when it goes wrong WITHOUT just 'rinsing and repeating' where you learn absolutely no lessons at all.

[edit:] One theory for all of this. Some of the big shops (Google, FB, Netflix, etc.) did it right, and now everyone is trying to copy the style of infrastructure management, except doesn't have the talent or wisdom to do it well.


Kubernetes looks to me like one of those prototypical technologies where LEGO-style use of deep learning to help set it up for whatever scenario is needed would already be doable and beneficial. I am wondering if Google is working on that already.


kubernetes is a classic case of a tool designed for consultants and companies to sell consulting services (Including cloud services, which is why every cloud provider leapt onto it).

In like 90% of the cases when someone used Kubernetes, Docker Swarm would have easily sufficed.


Disagree. OpenShift and CNCF are arguably exactly that, but Kubernetes itself isn't. It came out of the engineering team at Google, and its technical merit shouldn't be confused with the considerable marketing effort being put behind it.

Docker Swarm is much more deserving of this kind of cynicism -- a weak, badly designed solution forced on users by a company that's realizing their invention has been commoditized and is no longer a platform they control. Swarm was redesigned at one point to work more like Kubernetes because they realized it was a much saner model.

Kubernetes has more complexity, but it does scale down to single-node clusters.


I disagree. Kubernetes came out of Google, but has exploded in popularity due to its capability (which comes with extreme complexity): it can scale to extreme levels, but wrapping your head around it requires far more time and trouble for your most basic apps. Thus, you see the software consultants race to become the next Kubernetes experts, since choosing to deploy it essentially requires that you have dedicated professionals managing it (as Google does).

Docker Swarm might be incomplete and missing quite a few features, and it won't scale to thousands of nodes with dozens of independent apps, but the use-cases I've seen are far from that. A single engineer can basically get a "good-enough" moderately scaled system going.

If you have Google-scale problems with Google-caliber engineers and SREs backing you, use Kubernetes. Otherwise, using something else (Docker Swarm "just worked" for the cases I've seen) is easier.


If you can use docker swarm you can use Kubernetes. At that point k8s is just plain better.

The core of Kubernetes is super simple; all the hard parts are in actually setting up and maintaining Kubernetes the hard way on bare-metal machines. Odds are, if you're doing that, you have the resources to take some time to dive deeper into how it works.

I've done multiple single-engineer Kubernetes setups that are running in production today. So far I've had only a few problems with it. I know it's not a huge sample, and I'm not the smartest person in the world (or a true infrastructure guy), but I still found it easy to work with.

Swarm has always been a rushed afterthought IMO. Although I have way less experience with it than k8s, so I'm biased on this.


I had heaps of issues with Kubernetes.

I moved to Swarm a year ago and have had a grand total of 0 issues. It can do everything I need.


This is actually not correct and I'm a little surprised at the comment.

The ingress story in kubernetes is bring-your-own and usually people run their clusters behind Google or AWS LB which are supported as ingress in k8s. Running k8s on metal is a super daunting task. Choosing your network plugin is another task and usually you have to install a different system service on each of your nodes - Swarm has this built in.

And lastly, the Compose file format: 10 lines of Swarm compose file literally translate into many dozens of lines across separate k8s yml files. Creating this in k8s is super complex in itself.

Swarm is really pretty nice for someone setting up a few dozen nodes and services.


Why is this so heavily downvoted? A clear, legit opinion to have.

In my experience it's even like this: I have never seen any working kubernetes app. But I deployed my own hello world docker swarm app in 2 days. (And that is a fact, not an opinion. The opinion part is up to you.)


Because it's not remotely true, and is very cynical and biased without any evidence. k8s success isn't because of some consultancy marketing, it's because a lot of people are using it very successfully in production, myself included.


Oh didn't know we are in the comedy channel here.

A penguin walks into a bar, goes to the counter, and asks the bartender, "Have you seen my brother?"

The bartender says, "I don't know. What does he look like?"


Docker Swarm is a hot mess.


I've used it extensively and have been happy with it. I've never seen more fud in HN than in swarm vs kub threads.


This is bull. I use both and the difference I see is:

- Swarm does not have namespaces, a swarm cannot be divided into multiple independent ones.

- Swarm CE does not have per-operator Auth

And

- k8s has more moving parts

- has more complicated config files

To me swarm is to k8s like nginx is to apache. The former is lean and easy to setup, the latter has all the bells and whistles for enterprisey, service provider scenarios.


It might be and still succeed compared to its big brother.


I'm inclined to agree with you, but you need to substantiate your thesis with evidence.


If you are, so would you.


Isn't Docker Swarm declared dead by the Docker team?



This is FUD


tl;dr: author believes “No”.

Betteridge’s Law still applies.


Copied from Wikipedia because I didn’t know:

Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word no." It is named after Ian Betteridge, a British technology journalist, although the principle is much older. As with similar "laws" (e.g., Murphy's law), it is intended to be humorous rather than the literal truth.


Not sure why the downvotes; merely pointing out that Betteridge’s Law is once again proven out.



