
Come on. What? Setting up a load balancer or nginx is considered complex now?



> What? Setting up a load balancer or nginx is considered complex now?

It's not complex to set up a load balancer in a given specific environment. But it's another kind of ask to say "set up a load balancer, but also make it so that the load balancer exists in future dev environments that can be auto-set-up and auto-torn-down. And also make it so that the load balancer will work on dev laptops, AWS, Azure, Google, our private integration test cluster on site, and our locally-hosted training environment, with the same configuration script." All of these things can be done in k8s, and basically are by default when you add your load balancer there. They can be done other ways too, or simply ignored and not done at all. But k8s offers a standardized way to approach these kinds of things.
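To make that concrete, here's a minimal sketch of what "adding your load balancer in k8s" usually amounts to: a Service of type LoadBalancer (the names and ports below are made up for illustration). On AWS/Azure/GCP the cloud controller provisions the external load balancer; on a local or on-prem cluster a bare-metal implementation like MetalLB fills the same role, with the manifest unchanged.

    # Hypothetical manifest: expose an app behind a load balancer.
    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend        # illustrative name
    spec:
      type: LoadBalancer
      selector:
        app: web-frontend       # pods to route traffic to
      ports:
        - port: 80              # externally exposed port
          targetPort: 8080      # container port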


In my mind standardisation is how we really solve problems in much of software development. Everything before that is just practice and/or hacks.


> In my mind standardisation is how we really solve problems in much of software development.

I've been having this thought very often lately.

The only way for humans to do something faster is to use a machine. Any machine is built on some assumption that something is repeatedly true, that some things can be repeatedly interacted with in the same way.

Finding true invariants is very hard, but our world is increasingly malleable. Over time it is getting easier to invent new invariants and pad things out so that the invariant holds.


It's true not just for machines but for engineering in general. Whether it's civil, mechanical, electronic, or semiconductor engineering, the foundation is built on setting boundary conditions to make the natural world predictable so that it can be reliably manipulated. Things most often go wrong when those conditions are poorly understood, constrained, or modeled, such as when using an unproven material, using imprecise parts, or ignoring thermal expansion when designing structural components.

Engineers have a plethora of quality control standards and centuries of built up knowledge to make this chaos manageable and the problems tractable.


I'm fine with standardization, as long as I set the standard. For standard 0, I propose that the only spelling for "standardization" will be with a 'z' (which is itself pronounced "zee").


American here too -- but I don't think it's reasonable to demand American spelling from the rest of the Anglophone world.


I think the downvotes are missing the point. This is a key problem with standards: sometimes they standardize around something unreasonable. And tech is already riddled with all sorts of standards which are half-implemented in n different ways.

I think a better approach would be to have a specification for more robust negotiation protocols. When I see "standardisation," I already know that this means the same thing as "standardization" and furthermore that I should expect to see "colour"/"honour," organizations referred to in plural, "from today" rather than "beginning today" or "starting today," and even "jumpers" over "sweaters," "lorries" over "trucks," "biscuits" over "cookies," and more interrogative sentences in conversation. A British English speaker likely does the same process in reverse.


Perhaps I should clarify that "demanding American spelling is unreasonable" was specific to the context of HN (or other open discussion forums with international readership).

Within, say, the volunteer-maintained documentation of MDN, the tradeoffs are quite different. There, ease of reading for reference by busy coders is much more valuable relative to ease of typing up a new contribution. Frequent switches between "color" and "colour" become a time-wasting distraction.

MDN should pick a standard and insist on it. And if Ubuntu chooses to require British spelling throughout, I'd say that's good.


Some of those requirements are far-fetched. Multiple cloud environments AND on-prem?

Ansible and Vagrant are not perfect, but I think they are far simpler than a single-node k8s instance, and more representative of an actual production environment.
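For a sense of what that alternative looks like, here's a minimal Ansible sketch of the HAProxy case (the playbook and file names are hypothetical, not from the thread):

    # Hypothetical playbook: install HAProxy and push a templated config.
    - hosts: loadbalancers
      become: true
      tasks:
        - name: Install HAProxy
          ansible.builtin.apt:
            name: haproxy
            state: present
        - name: Deploy haproxy.cfg from a template
          ansible.builtin.template:
            src: haproxy.cfg.j2
            dest: /etc/haproxy/haproxy.cfg
          notify: Restart haproxy
      handlers:
        - name: Restart haproxy
          ansible.builtin.service:
            name: haproxy
            state: restarted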


I’ve seen my company go multi-cloud just to appeal to a single client. Now we’ll need a multi-cloud, multi-continent setup to handle European clients. And I’m sure in another 2 years we’ll need our whole stack in China to support another client's requirements.

This is not my strength in any way, but hearing from those teams, Kubernetes will be a godsend.


And one-hour photo is now considered slow.


Uh, I have some news for you...


Setting up one is easy. Setting up one that gives multiple separate teams the ability to configure their services, and apply those changes to servers around the world and in different environments, repeatedly and safely, is harder.


We just spent months at my workplace working on a system to reliably define and configure a set of parallel, siloed, integrated datastores, services, and network stacks within Kubernetes/Istio (and AWS), to reliably roll out new software revisions within those silos, and to account for the changing "shape" of the configuration/content in those silos. It's repeatable and safe now, but it took a lot of effort.


Is any of your new stack open sourced?


Not yet ... perhaps some day.


There's a huge difference between manually setting up a load balancer - let's say HAProxy - and being able to just declare in an application that it needs "this and this configuration" to route HTTP traffic to it.

The time I spent managing HAProxy for 5 services was greater than the time I spent managing load balancing and routing with k8s for >70 applications that together required >1000 load-balanced entrypoints.

It's a lever for the sysadmin to spend less time on unnecessary work.
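To illustrate what "just declare" looks like in practice, here's a minimal Ingress sketch (the hostname and service name are made up); the ingress controller turns this declaration into actual proxy/load balancer configuration:

    # Hypothetical example: declare HTTP routing alongside the application.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-frontend
    spec:
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-frontend
                    port:
                      number: 80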


Depends who's doing the configuring. I don't know how to do it. But nor do I know how to use k8s for that matter.


Well, set me up one. :P


My pubkeys are below; send me an IP address and port to an SSH server.

https://github.com/cameronnemo.keys


Anyone can SSH into a server, apt-get haproxy, tweak the configuration, and get it "working", where the definition of working is accepting and routing traffic. But that's just a hobby setup. When people say setting up a load balancer is complex, they are talking about professional setups, not a one-off software install on a single server.


So give me a (possibly private) ASN, a couple of BGP peers, and the FRRouting suite. ACME is not difficult either.


But I want to be able to update my haproxy config with a git push, and roll it back with a single command, without SSHing into anything, if something goes wrong. I want my everyday administration to be simple. Not the initial setup.


k8s did not invent gitops.


No, but it sure as hell made it a lot easier and more straightforward.
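As one concrete (hypothetical) example of that pattern, an Argo CD Application keeps the cluster synced to a Git repo, so a git push deploys a config change and a git revert rolls it back; the repo URL and names below are illustrative only.

    # Hypothetical Argo CD Application: cluster state follows the Git repo.
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: edge-routing
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://example.com/ops/edge-routing.git   # illustrative repo
        targetRevision: main
        path: manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: edge
      syncPolicy:
        automated:
          prune: true           # remove resources deleted from Git
          selfHeal: true        # revert manual drift back to Git state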


Now set it up in 30 data centers around the world, with the ability for dozens of different teams to add and change their applications, and across multiple staging and QA environments.


Why do I need 30?

1 rack in a datacenter is plenty for a database-backed web app with a million users.


That is going to be killer latency for people in other parts of the world.

In my case, I work for a CDN, so we need to have data centers all around the world.


My what and what? :P I do get your point, it is "easy" for some definition of such, but to be fair, k8s would automatically fill in the IP and port for my part of it all, at least.


I never knew about this github feature. Nice!



