
I think there's something missing here: how we operate applications.

It's not unusual to run an application on a server that's using only 25% of its resources (e.g., RAM and/or CPU). Then there are many different servers running many applications, all of which are underutilized.
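As a rough illustration of what that underutilization costs (all numbers here are hypothetical, not from the comment above): if each server carries only a quarter of the load it could, consolidating the same work onto fewer, better-utilized machines shrinks the fleet considerably.

```python
import math

# Hypothetical fleet: 10 servers, each averaging 25% utilization.
servers = 10
utilization = 0.25
target = 0.75  # a safer consolidation target than 100%

total_load = servers * utilization       # 2.5 server-equivalents of work
needed = math.ceil(total_load / target)  # 4 servers, averaging ~62.5% each
```

Six of the ten machines could be powered off (or never rented) without adding any headroom risk beyond the chosen 75% target.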

There are more efficient ways to operate these applications that drive up power efficiency. Google brags about the power efficiency of their data centers[1]. They do a bunch of things worth copying.

There are ways to operate software better... not just develop the apps themselves.

[1] https://www.google.com/about/datacenters/efficiency/

edited to add missing link




Pre-cloud, it was always like pulling teeth to get new hardware provisioned. You had to get past the guy who had realized his only power was saying no by refusing to write checks, then get IT to provision the box, and there might be physical layout issues that complicated doing it in the best way possible.

It's very common for someone to take all of those servers running at 25% and do manual bin-packing with them. It's also common for changing features and customer needs to alter the resources required after the manual process is completed, leading to further rounds of bin packing.
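The manual process described above is essentially the classic bin-packing problem. A minimal first-fit-decreasing sketch (workload sizes are made-up fractions of one server's capacity, not anything from the thread):

```python
def first_fit_decreasing(workloads, capacity=1.0):
    """Pack workloads (resource fractions) onto as few servers as possible.

    First-fit decreasing: sort largest-first, place each workload on the
    first server with room, and open a new server only when none fits.
    """
    servers = []  # remaining free capacity of each server in use
    for w in sorted(workloads, reverse=True):
        for i, free in enumerate(servers):
            if w <= free:
                servers[i] = free - w
                break
        else:
            servers.append(capacity - w)
    return len(servers)

# Hypothetical workloads as fractions of one server's capacity:
count = first_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4, 0.1])  # 3 servers
```

Schedulers like Borg and Kubernetes automate exactly this kind of placement, and re-run it as workloads change, which is what makes the repeated manual rounds go away.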

And every time you relocate services or hardware, there's a chance for an outage. That can become a wall you can't climb or cost you social capital every time it happens.

My general impression of Kubernetes is it's trying to solve that social problem with technology. We know how that usually goes, but any port in a storm, right?


Most companies don't have control over their data centers (except to maybe use energy efficiency as a selection criterion). But you have a point about CPU.

But regarding incentive alignment, making those data centers more efficient is not just more green but also saves the companies a boatload of money.

Since cloud providers don't charge extra based on your utilization percentage, I've always told my teams: if the CPU is running cold, either the software can use it more efficiently or we're overpaying by renting more capacity than we need.
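To put a number on "running cold" (the instance count, rate, and utilization below are hypothetical): at a fixed hourly rate, the unused fraction of rented capacity is money spent on idle hardware.

```python
# Hypothetical fleet: 20 instances at $0.10/hour, averaging 25% CPU.
instances = 20
hourly_rate = 0.10
utilization = 0.25

monthly_cost = instances * hourly_rate * 24 * 30  # $1440/month
wasted = monthly_cost * (1 - utilization)         # $1080/month on idle capacity
```

The same bill framed as waste tends to be a much more effective argument for right-sizing than an abstract appeal to efficiency.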


A lot of companies still have control over their data centers, or some portion of them, at least enough to control how things are scheduled on the hardware.

Kubernetes is similar enough to Borg (the datacenter/cluster OS that Google uses) to efficiently schedule workloads across a cluster, and it can be used anyplace one has a cluster for that purpose. That's just one example.


I agree, you can architect your system to make better use of hardware for sure.

What I meant was: I can't tell AWS to make more efficient power switching or design a lower-power ASIC. The best I can do is say that if they don't, I'll move to another provider.

It's been a long time since I've worked with a company that had any significant number of physical racks, rented or owned, where they could put their own hardware. Though, I admit, I may be suffering from bias here; it could be that they're more common and I just happen to work with companies that are cloud based.

Edit: I used to work for Lycos in the mid-2000s, and we were just starting to do that (run our workload on VMs instead of metal so we could make all the servers multi-tenant). That was the last time I worked with a physical server, and the Lycos network (across all their sites) was the equivalent of a top-50 website at the time.





