
I think it completely matters - yes, these orgs are a lot more wasteful, but there is still an opportunity to save money here, especially in this economy, if not for the internal politics wins.

I’ve spent time in some of the largest distributed computing deployments, and cost was always a constant factor we had to account for. The easiest promos were always “I saved X hundred million” because it was hard to argue against saving money. And these happened way more often than you would guess.




> I’ve spent time in some of the largest distributed computing deployments

Yeah, obviously if you run hundreds or thousands of servers then efficiency matters a lot, but then there isn't really the option to use a single machine with a lot of RAM instead, is there?

I'm talking about the typical BigCorp whose core business is something other than IT, like insurance, construction, mining, retail, whatever. Saving a single AKS cluster just doesn't move the needle.


Yeah, I see your point where it just doesn’t matter - especially back to the original point, where it may not be at scale now, but you don’t want to go through the budget / approval process when you need it, etc.

I think my original point was more in the “engineers want to do cool, scalable stuff” realm - and so any solution has to support scaling out to the n’th degree.

Organisational factors pull a whole new dimension into this.


I mean yeah, definitely - it blows my mind how much tolerance for needless complexity the average engineer has. The principal/agent mismatch applies universally, and beyond that it is also a coordination problem - when every engineer plays by the "resume driven development" rules, opting out may not be the best move, individually.




