Kubernetes 1.18 (kubernetes.io)
160 points by onlydole on March 25, 2020 | hide | past | favorite | 125 comments



Although I understand the fear that many have of being left behind the technology curve by not having the time - or the chance - to run Kubernetes in their day to day work, we really must appreciate that Kubernetes really is the future of infrastructure. For those that are looking at Kubernetes with suspicion, it is a natural instinct to think of K8s as a threat to all the knowledge we have built in the past few years, but it doesn't have to be that way. So many things that we would have built ourselves (deployments, upgrades, monitoring, etc) can now be streamlined with K8s, and existing knowledge around those topics will just make our transition to Kubernetes faster (besides still being able to use most of our expertise within K8s anyways).

Managed solutions make K8s easy to use, while we can still benefit from being able to run our workloads left and right on any cloud vendor. In one word: portability, which in this day and age of cloud vendor lock-in is to be protected at all costs.

I know that some organizations allow employees to allocate time to explore new technologies and learn new practices. If your organization has no policy around this, it is worth asking. Ultimately it will benefit the organization and the business as a whole, as they will be able to build a solid foundation to more rapidly transition and execute on their digital products. Kubernetes is good for business.


> K8s as a threat to all the knowledge we have built in the past few years, but it doesn't have to be that way

All this knowledge is still relevant. Kubernetes is an extra layer on top of all the classical technologies out there. It is not a replacement - besides some fancy experiments, it doesn't replace GNU/Linux.

If you're going to run a Kubernetes cluster, you still have to know all the "classical" networking, GNU/Linux system administration and architecture, etc. Having a fancy task scheduler for containers, and a tool that sets up clever network bridges (or whatever your preferred CNI flavor does), and a bunch of other nice extras does not remove the requirement to know what's underneath.

And in any non-standard situation, there are chances that you'll have to build things from scratch. For example, I do want to run a K8s cluster over a cjdns overlay network (which doesn't have IPv4, so Flannel or Weave won't work). Haven't figured this out yet.

And if you're not setting up and managing the infrastructure - it's about the same as it had always been. Just a different user interface to manage your deployments/storage/networking.

> Managed solutions make K8s easy to use

I would disagree. "Using" is the same (that's the whole point of K8s). "Install and maintain" is easier, but only in a sense that you don't do this. ;)


> All this knowledge is still relevant. Kubernetes is an extra layer on top of all the classical technologies out there. It is not a replacement - besides some fancy experiments, it doesn't replace GNU/Linux.

> If you're going to run a Kubernetes cluster, you still have to know all the "classical" networking, GNU/Linux system administration and architecture, etc. Having a fancy task scheduler for containers, and a tool that sets up clever network bridges (or whatever your preferred CNI flavor does), and a bunch of other nice extras does not remove the requirement to know what's underneath.

This so much. I went from one year as a junior HPC cluster admin (glorified title, I was a software builder and then focused on container usage) to a 6 month internship where I was focused on being part of an OpenShift team. I’m fairly good at maneuvering my way around a system and getting things working, but being thrown head first into that, I realized how little I actually understood about systems, particularly networking related. I didn’t have a lot of time, and I learned a lot about OpenShift and K8s in general, but I felt more like an advanced user who could explain things to the others trying to learn their way around and build small tools, rather than an admin of the platform. Maybe I’m selling myself short and experiencing imposter syndrome mixed with being dumped into huge, pre-existing, and foreign infrastructure, but it was an eye opening experience.

Since that’s ended I’m at a new gig as a “standard” sysadmin. I’m using this to skill myself up and take the time to really understand as much of the layers and how they work together as I can, both on and off hours.

I’d love to get back into the K8s area, it’s such a fascinating workflow and paradigm, but I have some personal progress to make first.


I am making my money with Kubernetes and the ecosystem around it and I wholeheartedly disagree on K8s being the future.

However I do agree on vendor lock-in. At least to a degree. KVM and Xen didn't really prevent this for cloud environments.

I do think the main takeaway from a lot of this is the way software is designed. It is becoming more self-contained. Docker in many respects is like a static binary. FatJARs are a similar approach. Also, Go in general seems to be going down that path.

What Kubernetes really does is provide an agreed-upon standard for infrastructure, similar to what Docker gave for software packages.

They enabled concepts like unikernels to at least become interesting, because they smoothed the way for thinking of software as something that should be self-contained.

I think the future really is one where Kubernetes and Docker are just annoying overhead, where we find it odd how something "emulates" them, just how terminal emulators emulate... well, terminals.

We are in a feedback loop where we put a tighter and tighter corset on the software we develop. First there were compute clouds, where developers learned it's bad to keep state around; then there was Docker, then Kubernetes, where practices that have been best practices for a long time are "forced" to be followed more and more, especially because whoever provides your infrastructure and the developer are able to agree on the interface.

Docker and Kubernetes are standards due to their dominance, similar to Internet Explorer back in the day and Chrome today. As of now there are only minimal written specifications; most of it is standardized by the implementation. Hopefully this will change some day to stabilize the interface and give opportunities for competing implementations, so more innovation can happen outside the boundaries of these projects, allowing for competition.

Maybe this has a positive influence on complexity as well.


I'm sorry but this reads like some mix of a sales pitch and religious preaching.

K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring, etc.." it just black boxes them. It also assumes out of the gate that everything needs to be able to do HA, scale for 1,000,000 instances/s etc...

Over and over and over people show examples (I'm guilty too) of running internet scale applications on a single load balanced system with no containers, orchestration or anything.

So please stop preaching this as something for general computing applications - it's killing me cause I've got people above me, up my ass about why I haven't moved everything to Kubernetes yet.


> K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring, etc.." it just black boxes them.

Kubernetes does not black box anything. At most it abstracts the computer cluster comprised of heterogeneous COTS computers, as well as the heterogeneous networks they communicate over and the OS they run on.

I'm starting to believe that the bulk of the criticism directed at Kubernetes is made up by arrogant developers who look at a sysadmin job, fail to understand or value it, and proceed to try to pin the blame on a tool just because their hubris doesn't allow them to acknowledge they are not competent in a different domain. After all, if they are unable to get containerized applications to deploy, configure, and run on a cluster of COTS hardware communicating over a software-defined network abstracting both intranet and internet, then of course the tool is the problem.


It's the exact opposite. I don't think that stuff should be abstracted away.


> It's the exact opposite. I don't think that stuff should be abstracted away.

Why not? The Kubernetes/serverless/DevOps people have a compelling argument--organizations can move faster when dev teams don't have to coordinate with an ops/sysadmin function to get anything done. If the ops/sysadmin/whatever team instead manages a Kubernetes cluster and devs can simply be self-service users of that cluster, then they can move faster. That's the sales pitch, and it seems reasonable (and I've seen it work in practice when our team transitioned from a traditional sysadmin/ops workflow to Fargate/DevOps). If you want to persuade me otherwise, tell me about the advantages of having an ops team assemble and gatekeep a bespoke platform and why those advantages are better than the k8s/serverless/DevOps position.


One of the things I see ignored in these discussions is the strategic timeline. Yes, dev teams can yeet out software like crazy without an ops team. But eventually you build up this giant mass of software the dev team is responsible for. Ops was never involved, until one day the mgmt chain for the dev team realizes it can free up a bunch of capacity by dumping its responsibilities onto ops.

IMO, some of these practices come from businesses with huge rivers of money who can hire and retain world class talent. I’d like to see some case studies of how it works when your tiny DevOps team is spending 80% of their time managing a huge portfolio of small apps. How then do you deliver “new, shiny” business value and keep devs and business stakeholders engaged and onboard?


I might be misunderstanding you, but this line makes me think you misunderstood the k8s/serverless/devops argument:

> your tiny DevOps team is spending 80% of their time managing a huge portfolio of small apps

In a DevOps world (the theory goes), the DevOps team supports the core infrastructure (k8s, in this case) while the dev teams own the CI pipelines, deployment, monitoring, etc. The dev teams operate their own applications (hence DevOps), the "DevOps team" just provides a platform that facilitates this model--basically tech like k8s, serverless, docker, etc free dev teams from needing to manage VMs (bin packing applications into VM images, configuring SSH, process management, centralized logging, monitoring, etc) and having the sysadmin skillset required to do so well [^1]. You can disagree with the theory if you like, but your comment didn't seem to be addressing the theory (sincere apologies and please correct me if I misunderstood your argument).

[^1] Someone will inevitably try to make the argument that appdevs should have to learn to "do it right" and learn the sysadmin skillset, but such sysadmin/appdev employees are rare/expensive and it's cheaper to have a few of them who can build out kubernetes solutions that the rest of the non-sysadmin appdevs can use much more readily.


I approached K8s specifically from the point of view of Ops. It simplifies the story of supporting many applications immensely; in fact, my first production deployment was done explicitly for that reason - we have over 60 applications, and we can't really reduce that number without an unholy mess of rewrites that isn't certain to reduce it at all.

Come Kubernetes, and we have a way to black-box developer excesses and push 12-factor onto them; out of over 60 apps, we have reduced our workload to really caring about maybe 5 classes of them, as they are commonalized enough that we can forget about them most of the time.

At a different job, we're pushing heavily towards standardized applications, to the point of Ops writing frameworks for the devs to use - thanks to k8s we get to easily leverage that, compared to spending lots and lots of time on individual cases.


> It also assumes out of the gate that everything needs to be able to do HA, scale for 1,000,000 instances/s etc...

k8s makes no assumptions about your workloads, it just gives you tools. And it's super useful even if you don't need to do HA or scale to a million instances.

Most production apps still need to manage deployments and rollbacks, configuration, security credentials, and a whole bunch of non-scale related things. And k8s makes a lot of that significantly more manageable.

Of course, this is overkill for a single application, but as you start adding more applications that need to be managed, the benefits really start to add up.

If you give something like GKE a chance, you might be pleasantly surprised. :-)


> K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring, etc.." it just black boxes them. It also assumes out of the gate that everything needs to be able to do HA, scale for 1,000,000 instances/s etc...

What I've seen, anecdotally, is that many ops-background people don't "get" why kubernetes is such a big deal. They assert rightfully that they can already do everything, they already know how to do everything, and they can do it without the overhead (both cognitively and in terms of resource utilization) of k8s.

But, if you are writing and deploying code - especially if you're not in a terribly agile organization - k8s eases so many real pain points that "old" models have, which ops teams may be only vaguely aware of. If you need a certain dependency, if you need to deploy any new software, an entirely new language or approach, or a new service, you now have the ability to do it directly and immediately.

I can't tell you what a big deal it is going to be for a developer at a random bigco to be able to run their code without waiting for ops to craft a VM with all the right bits for them.

k8s solves real problems. If you have a monolith and need to solve how to scale it, that's not where k8s shines. But with lots of small workloads, or dynamic workloads, or existing dev vs ops organizational hurdles, it can really be a game changer.



Nice post! But can you really compare AWS with k8s? With kubernetes, most companies still won't want to run it. They'll either use VMs or (where possible) a managed kubernetes. That's just one product for AWS. In the end, not everything will be k8s, you still need DNS, object storage, block storage, etc. I don't see why AWS couldn't thrive in a scenario where everyone uses k8s.

Quite the opposite. k8s isn't easy to set up, run or maintain. A large company running clusters with millions of nodes is probably more capable of making it appear smooth than some small hosting provider with only a few servers.


> we really must appreciate that Kubernetes really is the future of infrastructure

No, it isn't. It's extra complexity most don't need.


Oh yeah? How are you (or your company) running their applications?


Reminder to everyone that unless you have a truly massive or complex system, you probably don’t need to run K8s, and will save yourself a ton of headaches avoiding it in favor of a more simple system or using a managed option.


Not sure why this disclaimer has to be posted every time there's a discussion on K8s. It is a tool, if you need to use it, do use it. If not, don't.

Although I would argue that you need to know what trade offs you are making if you have the right use-case (multiple containers you need to orchestrate, preferably across multiple machines) and you are not using it or a similar tool. There are lots of best-practices and features you get out of the box, that you would have to implement yourself.

You get:

* Deployments and updates (rolling if you so wish)

* Secret management

* Configuration management

* Health Checks

* Load balancing

* Resource limits

* Logging

And so on (not even going into stateful workloads here), but you get the picture. Whatever you don't get out of the box, you can easily add. Want Prometheus? That's an easy helm install away.

Almost every system starts out by being 'simple'. The question is, is it going to _stay_ simple? If so, sure, you can docker run your container and forget about it.
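
To make that list concrete, here's a rough sketch of a single manifest touching several of those boxes (the app name, image and endpoints are hypothetical; a minimal illustration, not a production config):

    # Hypothetical app; shows rolling updates, config/secret injection,
    # health checks and resource limits in one Deployment.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: registry.example.com/example-app:1.0.0
            ports:
            - containerPort: 8080
            envFrom:
            - configMapRef:
                name: example-app-config    # configuration management
            - secretRef:
                name: example-app-secrets   # secret management
            readinessProbe:                 # health check gating traffic
              httpGet:
                path: /healthz
                port: 8080
            resources:                      # resource requests/limits
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                memory: 256Mi
    ---
    # A Service load-balances across the Deployment's ready pods.
    apiVersion: v1
    kind: Service
    metadata:
      name: example-app
    spec:
      selector:
        app: example-app
      ports:
      - port: 80
        targetPort: 8080

Rolling updates then come down to kubectl apply on that one file, and rollbacks to kubectl rollout undo deployment/example-app.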


I would stress the out-of-the-box support for blue-green deployments in a system that is fully versioned, thus supports rollbacks like a champ.


This is news to me, do you have any links to k8s's support for blue-green deploys?

I've been holding off setting up a system like Spinnaker because I'd read it was coming (in the form of custom deployment strategies), but can't find anything current on the subject.


At the service level Kubernetes offers deployments

https://github.com/kubernetes/kubernetes/tree/master/pkg/con...

At the application level, the strategy consists of having two applications deployed and updating the application's ingress after the deployment controller finishes updating the deployments.
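
A common variant switches at the Service level instead of the ingress; a minimal sketch with hypothetical names and labels - run a myapp-blue and a myapp-green Deployment side by side and flip the Service selector once the new one is fully rolled out:

    # The two Deployments (myapp-blue / myapp-green, not shown) differ only in
    # image tag and in their "version" label. This Service is the switch:
    # repointing the selector cuts traffic over; pointing it back is the rollback.
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
        version: green   # was "blue"; edit and re-apply to cut over
      ports:
      - port: 80
        targetPort: 8080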


I like the idea of kubernetes, and (some time ago) worked through a basic tutorial. My main confusion is how to set up a development environment. Are there any guides you could suggest that cover basic workflows?


Minikube is super easy to use, or k3s.


You can migrate docker deployments to K8s just by adding the parts you were missing, so when in doubt, it always makes sense to start with docker, docker-compose, and only consider K8s as an alternative to docker swarm.


In practice, I found docker to be much more brittle than kubernetes, even when kubelet uses docker underneath. K8s minimizes the amount of "features" used from docker, and unlike docker is properly designed for deployment use out of the box (and generally has cleaner architecture, so figuring out server failures tended to be easier for me with k8s than with docker daemon)


I actually spent the past 3 days attempting to migrate my DIY “docker instances managed by systemd” setup to k8s, and found getting started to be a huge pain in the ass, eventually giving up when none of the CNIs seemed to work (my 3 physical hosts could ping each other and ping all the different service addresses, but containers couldn’t ping each other’s service addresses).

That said, if anyone REALLY wants to go the k8s route, it seems like starting with vanilla docker did allow me to get 75% of the work done before I needed to touch k8s itself :)


This should be really easy using k3s.


This false information really needs to die. k8s is a sane choice in many cases, not just at hyperscale. Regular business apps benefit from rolling upgrades and load balancing between replicas. Managed k8s platforms like GKE make cluster management a breeze. Developer tooling such as Skaffold makes developing with k8s seamless. I expect k8s to continue growing and to soon take over much of the existing Cloud Foundry estate in F500 companies.


Running k8s is much harder and takes more time than just having a few VMs with docker on it. Many applications never need to scale. I really like k8s from a user perspective but it's no easy task to set up. And managed solutions don't always work for everyone (and aren't always cost efficient).


If you run the whole thing (app + k8s) then I do agree with you that it's more complex and you're likely better off without it.

But, k8s offers a very good way to split the responsibility of managing the infrastructure and managing the applications that run on top. Many people work in medium to big corporations that have a bunch of people that are in charge of managing the compute infrastructure.

I certainly prefer k8s as an API between me and the infrastructure, as opposed to filing tickets and/or using some ad-hoc and often in-house automation that lets me deploy my stuff in a way that is acceptable for the "infrastructure guys".


Most companies will have an Ops team to manage the prod clusters


When I think about k8s complexity, I can only understand this argument if I'm the one dealing with the infrastructure required. If I have to install k8s in my servers, then I'll probably need to think hard about security, certificates, hardware failures, monitoring, alerting, etc. It's a lot of work.

However, if I use a managed k8s service, I probably don't have to think about any of that. I can focus on the metrics of my application, and not the cluster itself. At least, that's how I think it should work. I haven't used k8s in a while.


Honestly getting a cluster running is the easiest part. It’s all the add-ons like Istio that make it complicated.


You don't need Istio if your application is simple. I think Istio makes more sense when your application makes heavy use of peer-to-peer pod connections. If you can get away with a simple queue as a bus, it should remain simple. I think!


> Honestly getting a cluster running is the easiest part. It’s all the add-ons like Istio that make it complicated.

That's like saying running windows is the easy part, it's all the add-ons like Microsoft office that make it complicated.


I've enjoyed learning K8s for my not massive nor complex personal workload.

It's running and is more hands off than without it. I'm using a managed digital ocean cluster. I no longer have to worry about patching the OS as it's all handled for me. I also don't have to worry about having a server with a bunch of specialized packages installed, although I suppose only using containers could have gotten me that far.

I haven't had a ton of headaches. So, I guess people's experiences may differ.

It's interesting to me that K8s always draws out the "you probably don't need it" comments.


People say this every time anything related to k8s gets posted and I always wonder who it’s addressed to. The system doesn’t actually have to be that complex for kubernetes to be useful and kubernetes isn’t that hard to run. We’re in the process of switching from ecs to kubernetes and while it’s not an easy thing to make ready for production, it enables so much that wouldn’t even be possible with ecs.

To me this advice is only useful for tiny startups running a handful of web servers.


There is one more advantage vs ECS: there is no longer any lock-in. You can have more capabilities, using standard solutions that work anywhere you want.


What does it enable compared to ECS?


I think the definition of "massive or complex system" varies between developers with different backgrounds. In your opinion, what's considered a truly massive and complex system that may require Kubernetes?


What are some examples of recommendable simpler systems, or of the managed options you allude to?


Google Cloud Run, AWS Fargate, Google App Engine, Heroku etc. are comparable experiences to Kubernetes if you have the flexibility of (1) running on cloud (2) not having to configure host OS or rely on host GPUs etc.

Disclaimer: I work at Google Cloud Run.


Since you mentioned Cloud Run, I had one query.

I run docker compose locally for development. For prod, I just use a different docker compose file (with some values changed, for example the postgres database url etc.). I do this from a 5 USD per month droplet/vm. I can launch multiple services like this for my microservices platform. I can use a hosted database solution for another 15 USD per month, to get backups etc. Also I get a generous 1 TB bandwidth and predictable performance for my system as a whole.

In the past I have used appengine and been bitten by their update times (took more than 20 mins for a single code update, things could have improved now). Also I need to write deployment artifacts for each service.

Now, is there any benefit that Cloud Run (or any PaaS) could offer compared to this? Would it not be easier to just stay with docker-compose and defer upgrading to Kubernetes until you turn profitable or become unmanageable with a single VM?


You have vendor lock-in with all of these though


Not really, no.

Cloud Run implements the Knative API, so you can actually take it away and run it on any Kubernetes cluster anywhere on a cloud or in your datacenter.
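
For the curious, the Knative Service API is pretty small; a sketch like this (name and image are placeholders) should deploy to any cluster with Knative Serving installed, and, as far as I know, to Cloud Run via its Knative-compatible API:

    # Minimal Knative Service sketch; hypothetical name and image.
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello
    spec:
      template:
        spec:
          containers:
          - image: gcr.io/my-project/hello:latest
            env:
            - name: TARGET
              value: "world"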


Doesn't that require Istio and a few other things as well? I wouldn't wish Istio configuration and maintenance on anyone but the larger shops/teams. You might not be technically locked in, but not everyone can afford to dedicate the attention to tuning and maintaining Istio.

Source: at mid-sized company who has Istio in the stack.


Knative just needs a gateway (LB); not a full mesh. Istio is the default option. Alternatively you can use Gloo, Ambassador or something specifically built for Knative such as Kourier.


Vendor lock-in does not come from using GKE/EKS/AKS or derivatives.

What ends up happening is that your application consumes services (storage, analytics, etc.). You start using those services from the cloud provider, which makes sense as long as it is the right thing for your business (aligns w/ your cloud-native blueprint).

Kubernetes, by itself, is cloud-provider agnostic.


Hello! Cloud Run would be better if it supported Cloud IAP and wildcard managed certificates.

Overall though it is an amazing service and it is really fast and easy to use.


Does cloud run or fargate have an equivalent to a statefulset? Persistent storage and the ability to talk to a specific node/instance?


Thanks; Fargate was what I was anticipating, or Heroku. Hadn't looked at Cloud Run but definitely will.


This comment makes no sense.

There is AWS Fargate for Kubernetes, AWS Elastic Kubernetes Service, DigitalOcean Kubernetes, Google Kubernetes Engine etc.

All of which are on the cloud and all of which don't require you to configure host OS etc. Some offer full control over node configuration e.g. EKS whilst others manage that for you e.g. Fargate.


Thanks for repeating what I already said. I responded to the question asked. OP asked if there are simpler alternatives. These are simpler alternatives to Kubernetes.

I don't know which part you're not getting, but it appears that this person's intention is not to learn Kubernetes or deal with nodes in the first place.


AWS SAM has been great for running rust in AWS Lambda + API Gateway and for managing tables in DynamoDB.


thanks


Nomad and Consul from Hashicorp.


which is way more complex to set up than k3s


Perhaps installation and config are not as streamlined as one of the many available k8s setup tools, but the tools themselves are much easier to understand and less broad than the k8s system.

Used nomad/consul/fabio at a previous job for running containers, it was very easy to adopt. Way fewer new concepts as well.

It's worth mentioning I later chose GCP managed Kubernetes for a small cluster to run at my startup. I had to learn a few new things, but I'm not familiar with any "nomad as a service" offerings, so I went with k8s on Google Cloud.


When was the last time you used it? I found it straightforward and not so many knobs to twist. ACLs and cert management can be a bit of a PITA the first time (especially the former, and it’s something you really want to do before you’re relying on Consul for mission-critical stuff), but that’s about it. Still, those two things are mainly confusing due to poor documentation and not that complicated once you get up to speed, so it’s mainly the first time you do a new setup.


Downvoters: Care to elaborate what you think is wrong with my comment?


Yeah, why would anyone with a simple system want an elegant way to declaratively describe the state of their runtime?


Has anyone properly solved the CPU throttling issues they are seeing with Kubernetes [1]? Has this release solved it? We are seeing a lot of throttling on every deployment, which impacts our latency, even when setting a really high CPU limit. The solutions seem to be:

- remove the limit completely. Not a fan of this one since we really don't want a service going over a given limit...

- using the static CPU management policy [2]. Not a fan because some services don't need a "whole" CPU to run...

Does anyone have any other solutions? Thanks!

[1] https://github.com/kubernetes/kubernetes/issues/67577

[2] https://kubernetes.io/docs/tasks/administer-cluster/cpu-mana...


It's a bug in the Linux kernel that was introduced in 4.18 and later fixed. You might be OK if you're on a newer version that includes the fix.

The symptom primarily manifests as heavy throttling even though you haven't actually gotten to your CPU limit yet.

If you're just seeing heavy throttling AND it's pegged at the limit, you haven't necessarily hit the issue and should raise the limit first and observe.

Also don't forget to eliminate other possibilities. We initially thought we were experiencing this issue and later discovered the app was just memory constrained and extremely heavy GC during certain operations was causing the throttling.


1. It makes no sense to quota your cpu with the exception of very specific cases (like metered usage). You’re just throwing away compute cycles

2. Same applies to dedicated cores, for pretty much the same reasons

Having said that, if you really, really want quotas but don't want shit tail latency, I suggest setting cfs_quota_period to under 5ms via a kubelet flag
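
If I remember right, that's the kubelet's --cpu-cfs-quota-period flag (cpuCFSQuotaPeriod in the config file), behind the CustomCPUCFSQuotaPeriod feature gate - roughly:

    # Sketch of a kubelet config shortening the CFS period from the default
    # 100ms; assumes the CustomCPUCFSQuotaPeriod feature gate is available
    # in your version.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      CustomCPUCFSQuotaPeriod: true
    cpuCFSQuotaPeriod: 5ms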


> It makes no sense to quota your cpu with the exception of very specific cases (like metered usage).

This is not true at all. Autoscaling depends on CPU quotas. More importantly, if you want to keep your application running well without getting chatty neighbors or getting your containers redeployed around for no apparent reason, you need to cover all resources with quotas.


Agree re noisy neighbours, but autoscaling depends on _requests_ rather than _limits_, so you could define requests for HPA scaling but leave out the limits and have both autoscaling and no throttling.
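
Concretely, something like this (hypothetical names) scales on CPU utilization relative to the container's request, with no limit set anywhere:

    # HPA target utilization is a percentage of the pod's CPU *request*,
    # so the Deployment's containers only need requests, not limits.
    # autoscaling/v2beta2 is the current API around 1.18.
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: example-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: example-app
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70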


The problem with having no throttling is that the system will just keep on running happily, until you get to the point where resources become more limited. You will not get any early feedback that your system is constantly underprovisioned. Try doing this on a multi-tenant cluster, where new pods spawned by other teams/people come and go constantly. You won't be able to get any reliable performance characteristics in environments like that.

For such clusters, it's necessary to set up stuff like the LimitRanger (https://kubernetes.io/docs/concepts/policy/limit-range/) to put a hard constant bound between requests and limits.
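
For example, a LimitRange per tenant namespace can inject defaults and cap how far limits may drift from requests (values purely illustrative):

    # Containers that omit resources get the defaults below, and a declared
    # limit may be at most 2x the request.
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: container-defaults
    spec:
      limits:
      - type: Container
        defaultRequest:
          cpu: 100m
          memory: 128Mi
        default:
          cpu: 200m
          memory: 256Mi
        maxLimitRequestRatio:
          cpu: "2"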


And how will you get feedback on being throttled, other than shit randomly failing, e.g. connection timeouts?


Effective monitoring. Prometheus is free and open source. There are other paid options.


That was a trick question actually - use your Prometheus stack to alert on latency-sensitive workloads with usage over request, and ignore everything else.


Of course, you're missing the point. Depending on your application a little throttling doesn't hurt, and it can save other applications running on the same nodes that DO matter.

In the meantime you can monitor rate of throttling and rate of CPU usage to limit ratio. Nothing stops you from doing this while also monitoring response latency.

On the other hand CPU request DOES potentially leave unused CPU cycles on the table since it's a reservation on the node whether you're using it or not.

Again needs may vary.


You got it completely backwards. Request doesn’t leave unused CPU, as it is cpu.shares; limit does, being a CFS quota that completely prevents your process from scheduling even if nothing else is using cycles. Don’t believe me? Here’s one of the Kubernetes founders saying the same thing - https://www.reddit.com/r/kubernetes/comments/all1vg/comment/...


Incorrect. If a node has 2 cores and the pods on it have request of 2000m nothing else will schedule on that node even if total actual usage is 0.

You can overprovision limit.

This is easy to test for yourself.


> Agree re noisy neighbours, but autoscaling depends on _requests_ rather than _limits_, so you could define requests for HPA scaling but leave out the limits and have both autoscaling and no throttling.

I've just checked Kubernetes' docs and I have to say you are absolutely correct. Resource limits are used to enforce resource quotas, but not autoscaling.


Autoscaling depends on requests not limits. Read my explanation on “chatty neighbors” in other thread.


Thanks! Wouldn't that be an issue if a pod "takes over" the node, if for some reason a request uses too much CPU?


Not really. If you ensure every pod sets a CPU request (which sets up cgroup cpu.shares), your kubelet and system services are running in separate top-level cgroups (--kube-reserved and --system-reserved flags), and you have reserved-resource enforcement enabled, then on full node contention every container will just consume its proportional share. This is not to say that someone malicious wouldn’t be able to DoS a node, but untrusted workloads are a whole separate topic.
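
In kubelet-config terms, the reserved part of that setup looks roughly like this (a sketch; cgroup names and amounts are illustrative and depend on how the node is provisioned):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Reserve CPU/memory for the kubelet/runtime and for OS daemons, each in
    # their own top-level cgroup, and enforce the pods' allocatable budget.
    kubeReserved:
      cpu: 500m
      memory: 512Mi
    systemReserved:
      cpu: 500m
      memory: 512Mi
    kubeReservedCgroup: /kube.slice
    systemReservedCgroup: /system.slice
    enforceNodeAllocatable:
    - pods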


I've seen the cloud-agnostic nature of Kubernetes mentioned in many posts here. This is only true on the surface for a significant number of use cases and deployment models.

Once petabytes of data are out there in your GCP or AWS environment, "portability" will be costly due to the extortionate pricing of egress bandwidth.


A lot of people think of multi-cloud as some kind of arbitrage where you jump quickly between markets. Running applications in cloud environments is a lot more like leasing property. Once you set up there are costs to moving.

The portability argument boils down to saying you are not boxed in. If things get bad enough you can move. This is a big long-term advantage for businesses because it means you can correct mistakes or adjust to changing business conditions. That's what most people who run companies are really looking for.


It also leads to nullifying the advantages of the cloud. The whole point is the proprietary services they offer; you use those instead of building them yourself. Trying to be "cloud agnostic" is one of the biggest mistakes one could make.


I'd argue the whole point is actually the fact that you can lease CPU/memory/space as you need it, and capacity constraints now become simple cash constraints. You don't need to shell out millions of dollars on specialist, enormous hardware just to be able to use it for an hour.

Lots of big companies that operate extensively in AWS/Azure/GCP don't go anywhere near the managed services they offer, because they end up being a horror show in terms of scalability, functionality and troubleshootability. Depending on your risk appetite, running Kafka on Fargate/EC2 is a lot more attractive than using Kinesis (for example).


You're deploying to clusters of x86 machines in the first place because people in the 90s realized they were a little too into the proprietary APIs of their big iron vendors.


At that scale it is probably a better option to look into something like AWS Snowball and ship it on physical drives.

At the petabyte scale there will probably be huge cost savings to using your own private cloud on your own hardware.


With petabytes you should either have a procurement team that gets a great deal or you should look at co-location. At some point, hiring a team of infrastructure experts is the cheap option, esp if you leave hardware management with data center providers.

And it's not mutually exclusive. I've seen companies working very cost efficiently by using dedicated hardware for 99% of the workload and AWS or GCP for any experiments where they wanted to try out new features.


Containers are not necessary if systems are built with something like Guix or Nix, as they provide transaction-level updates to applications and dependencies, and are secure by default, since there is no need to run a daemon with root access to run, monitor and manage containers. They provide the same way of managing application deployment as the way application source code is managed, with versioned deployments and rollbacks all baked in.

But as with any technology, Guix and Nix are still a decade ahead of the present and may pick up later, when technology converges back to running application servers and other dependent software in isolation with user-level namespaces.

Kubernetes tries to solve one problem and creates 10 other infrastructure problems to manage, and instead of letting you work on the application, it ties the company to a specific distribution or cloud service provider. So far there is nothing revolutionary in it unless the startup or company adopting k8s is a Google-size operation.

From a software developer perspective, which is the main audience of HN, it will be popular, as most of them dream of or want to work for a company the size of Google. Startup founders want to solve the scaling problem like Google from the beginning, as everyone dreams of being Google-size from day one. Kubernetes' complexity is useful at large scale; for the majority, i.e. over 90% of deployments, simple containers, bare metal or VMs with traditional configuration management will be sufficient.


Kubernetes is not really that complex. It has primitives that cover a huge number of needs, so it can be complex when needed to solve complex problems, or simple when you're solving a simple problem.

I get the feeling people glance over the documentation and assume that you MUST use every single feature. Most of the features are only there when you need them for supporting advanced use.


As much as I like Guix (and like the idea of Nix but find it infuriating to actually work with), they don't solve anything that is covered by kubernetes, except possibly building of container images (as one can equate OCI containers with nix closures).

Where do Nix/Guix manage dynamic bin-packing of applications to servers (preferably without a blood sacrifice on the altar of "infinite recursion")?

Where do they handle nicely (de)provisioning and attachment/detachment of special resources, whether those are filesystems, block devices, any kind of special device etc.?

How about integrating various special networking resources like load balancers?

No, NixOps and related tools do not handle it well, as they need static assignment upfront and can't dynamically adjust as the system runs. Even at my small scale, all those features above are crucial, and k8s lets me not worry about problems most of the time. (Rarely we have a black swan, which pretty much always ends up being "not k8s' fault that the network died or nginx got hung".)


Could you elaborate on what your stack looks like and what tools you use other than Guix? What made you choose Guix over Nix?


The homoiconicity of Lisp made me go with Guix over Nix. I work with Lisp, and Guile felt much more natural; another reason is that the Guix documentation is really very good.


Kubernetes is here. Better to learn it and use it where appropriate than to fight it.


I don't need to fight it; I am not looking for a job, and I don't follow the herd mentality. My own startup is pretty happy with Guix and related infrastructure on bare metal. It works well for us, and we can still stand on the shoulders of giants who have done a much better job of managing large distributed infrastructure.


That’s awesome. I’m not evangelizing it or saying you should use it as I did say “where appropriate” and at your startup it doesn’t seem appropriate.

There are probably 10x more engineers that have experience with k8s than with guix though so ... perhaps that could be a factor though not a reason to change out the infra completely.


Having your program reified as YAML init+config plus an OCI image leads to easy resource-aware scheduling, auto-scaling, monitoring, etc.

Nix is great to build the image and manage changes to dependencies.


Anyone have a recommended guide for Kubernetes?


They're all bad. The official docs are the only place to go. Half of the time other resources are out of date or just plain wrong.

Start here: https://kubernetes.io/docs/tutorials/ Look at all the diagrams in the tutorials but don't bother with the interactive stuff.

Then learn the more detailed concepts here: https://kubernetes.io/docs/concepts/

Once you understand the lingo and the general idea of what the different functions are supposed to do, just copy some example deployments and try to get something of your own working.


I’ve been getting up to speed over the last few months.

Don’t do what I did: googling and going through random top hits. Most of these are 1-off blog articles that revolve around “install Helm and then do these 6 things” or “just kubectl apply random.link.com/some-script”.

Doing that just leads to tons of confusion and anger.

My recommendation: Suck it up and read through the official Kubernetes docs. Their docs aren’t written in a way that easily explains core concepts, unfortunately. However, slogging through it will give you some initial exposure to concepts and will let you know where to go back to later when you’re making mental connections.

Next, look for k8s tutorials from Digital Ocean and Linode. In my opinion, they’re the best written guides for demonstrating how to get from A to B.

You’ll start running through those guides and be referencing back to the official docs. Gradually bridges will form in your head and you’ll get an intermediate, functional level of competence.


There aren't any. Someone should write one. My beef with them all is they start from the top down, taking giant leaps of logic and assuming that everyone has the same use cases and motivations. In my personal opinion, humble though it may not be, the way to understand Kubernetes is from the bottom up. First of all, figure out how Linux starts your process. I've noticed that a lot of people who are afraid of k8s are afraid of it partly because they have no idea how a Linux machine works anyway. So start there. After you've got that figured out, learn about control groups. What kinds of resources can be controlled? How do you create and destroy control groups? How do you put processes in them? Same thing for namespaces. How do they work? What can a process see from its perspective inside a process namespace?

If you grok all that, then you are ready to grok the K8s Container Runtime Interface. And, if you really grokked it, you understand that Docker is beside the point, isolation and namespaces are optional, and so forth. All k8s demands from the "container runtime" is that a thing has a name and a defined lifecycle.

Once you understand how k8s works at the pod and node level then it should be perfectly obvious how it works at higher levels. That's why I really want to see someone write the bottom-up guide!


What would you like to learn about kubernetes? I'd be interested in helping. I don't have any knowledge in manually running a kubernetes cluster, BUT, if it's how to use it once you've spun up a managed kubernetes server, I'd be happy to help here (invitation open to others too). I'm not an expert by any means but I do work on our cluster at Buffer on a daily basis and can share knowledge I've acquired so far :) .


I don't know if it's updated for more recent versions, but I read "Kubernetes: Up and running" last year and it was excellent. It covers the motivations behind some of the decisions which helped things click for me.


Also read this book last month and it gave me a nice overview. But when doing all the examples you notice that, even with the updated second edition from the end of 2019, many of them are outdated. Not a big problem to solve, just small differences, but then you realize that Kubernetes is a fast-moving target. A book about technology will always have this problem, but the K8s space seems to move especially fast currently.

Now I’m also leaning more towards the official docs as a recommendation, because they should always be more up to date... nevertheless, “Kubernetes: Up and Running” took away my fear of this (at first) complex architecture. In the end, K8s is not that difficult to understand, and the involved building blocks make sense after you get the hang of it.

By the way, Microsoft is giving away “Kubernetes: Up and Running” second edition for free currently: https://azure.microsoft.com/en-us/resources/kubernetes-up-an...


I found the Udemy CKA / CKAD certification courses to be good, even if you are not interested in certification. Most of the books I have come across have been good as well but Kubernetes moves pretty fast so some of the YAML files might need to have slight tweaks.


https://github.com/alta3/kubernetes-the-alta3-way - Ansible playbooks based on "the hard way", but without being dependent on Google Cloud


The KubeAcademy courses are nice little (free, no-reg) intros, and they have transcripts if you read faster than you watch: https://kube.academy/courses


Thanks for the link. The presentation looks very nice, I'm going to give the course a try, especially since it's free (I've been considering a subscription to either Linux Academy or A Cloud Guru).

EDIT: Upon further investigation, I see that there's not much technical content provided here. So it goes. Thanks anyways.


Kubernetes Deconstructed video https://vimeo.com/245778144/4d1d597c5e


I had to ragequit in the third minute. "A container is the filesystem inside your application." Just, no.

This is exactly the kind of word salad approach to explaining k8s that I complained about in my other comment. All of the k8s tutorials are written by (or recorded by) people with no idea what they are talking about.


Honestly, please record a better one. It's true that most tutorials are very confusing and mix up terms. It's very hard to find good docs. If you have the time, please write something, I'm sure it'll be appreciated.


we're building an interactive k8s learning platform at learn.msb.com


Podcast interview with the release team lead: https://kubernetespodcast.com/episode/096-kubernetes-1.18/


Really annoying that AWS EKS only got K8S 1.15 last week.


I've taken a couple of k8s courses, and I understand all the small parts that make up k8s, but it still seems that there are no easy solutions to install on bare metal. The default recommendation is to always just roll with a managed solution. This is slightly irritating considering there are plenty of companies out there who own their own infrastructure.

There are plenty of great developer distributions out there (k3s, kind, minikube, microk8s), but those are single node only, and aren't meant for production use.

I'm still searching for a solid guide on how to get k8s installed on your own hardware. Any suggestions would be very appreciated!


https://www.ansibleforkubernetes.com/

I've found this a pretty good starting point. Keep in mind the book is still a work in progress. Note, I haven't actually run k8s in production, but this book has helped me get something up and running in VMs pretty quickly using Ansible on top of kubeadm.


k3s supports multinode


Ah, I wasn't aware. Does it support HA as well?


Yes if you use an external DB for the k8s control plane

https://rancher.com/docs/k3s/latest/en/installation/datastor...


There's also an experimental embedded DQlite (raft + sqlite) thingie too!

https://rancher.com/docs/k3s/latest/en/installation/ha-embed...


Shamelesss plug: Keights, my Kubernetes installer[1] for AWS supports 1.18 (the latest version available on EKS is 1.15). Keights also supports running etcd separate from the control plane, lets you choose instance sizes for the control plane, and can run in GovCloud.

1. https://github.com/cloudboss/keights


How does it compare to kops or, dare I say it, kubespray?


Hi, I've often worked in corporate environments where it wasn't necessarily allowed to spin up a VPC, or to create an internet gateway. In the time I was creating this, many companies who are in healthcare or are otherwise locked down, could not use kops due to its requirement for an internet gateway. I created keights to fill that space, so that anyone could run Kubernetes in AWS, even in air gapped environments. This is pretty common nowadays, by the way - enterprises have a team to manage all AWS accounts, and they set up VPCs and connectivity ahead of time, before development teams get access to the account; access to the internet is through a proxy only, and no one can modify the network. Not to mention, most of the access to Amazon's services can now be done without an internet gateway, using VPC endpoints. Keights fits well in this world of locked down network access, and it works well even in GovCloud (you would need to build the AMI there as my public AMIs cannot be shared with GovCloud accounts).

Keights and Kubespray both use Ansible, however they do it in a very different way. (Disclaimer: I haven't used kubespray, only looked over the documentation). Keights uses Ansible roles to build CloudFormation stacks to produce a cluster. The nodes in the cluster bootstrap themselves using systemd services that are baked into the AMI; Ansible does not run on the nodes in the cluster. Kubespray, as I understand it, uses a traditional Ansible approach of pushing configurations over ssh to nodes in its inventory. To my knowledge, it does not actually build the machines in the cluster, it just configures existing machines. Keights does the full end-to-end automation to bring up a working cluster, including the creation of all required AWS resources (autoscaling groups, a load balancer for the apiserver, security groups, etc - though you do provide certain resources as parameters, for example your VPC ID and subnet IDs, due to the aforementioned requirements to fit into locked-down environments).


FWIW, a few days back I upgraded my microk8s-based k8s to the 1.18 beta channel and it solved at least the ImagePullBackOff problem that I had with pulling from my private GitLab registry. Worked like a charm.


I'm pleased to see the changes in the HPA, having pod scale-up/down periods be tied to a systemwide setting was a bit painful.


This one? https://github.com/kubernetes/kubernetes/blob/master/CHANGEL... > autoscaling/v2beta2 HorizontalPodAutoscaler added a spec.behavior field that allows scale behavior to be configured. Behaviors are specified separately for scaling up and down. In each direction a stabilization window can be specified as well as a list of policies and how to select amongst them. Policies can limit the absolute number of pods added or removed, or the percentage of pods added or removed. (#74525, @gliush) [SIG API Machinery, Apps, Autoscaling and CLI]


Yep, in the version I'm on (1.15) there's only global flags and config[1] which apply to all HPAs, but not all apps should scale the same way - our net facing glorified REST apps can easily scale up with, say, a 1-2m window, but our pipeline apps sharing a Kafka consumer group should be scaled more cautiously (as consumer group rebalancing is a stop-the-world event for group members)

1: https://v1-15.docs.kubernetes.io/docs/tasks/run-application/...
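
For reference, the 1.18 behavior field from the changelog above makes that per-HPA; a sketch of a deliberately cautious scale-down for the Kafka-consumer case (names and numbers made up):

    # Scale up after a short 1m stabilization window, but scale down at most
    # one pod per 5 minutes and only after 10 minutes of consistently low
    # usage, to avoid constantly rebalancing the consumer group.
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: pipeline-consumer
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: pipeline-consumer
      minReplicas: 3
      maxReplicas: 12
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 60
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 60
        scaleDown:
          stabilizationWindowSeconds: 600
          policies:
          - type: Pods
            value: 1
            periodSeconds: 300
          selectPolicy: Min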


Kubernetes? No thanks.



