Around a dozen years ago, my business was designing, building, and supporting physical infrastructure for startups.
One company was humming along nicely on $4000 of used hardware and a $2000-a-month cage in a colo facility.
Their business had ramped up and got some funding, so we got them another $60k worth of hardware in another facility.
They onboarded still more customers, to the point where they needed a few thousand dollars' worth of SSDs to keep up with their random I/O demands.
But their new VC-installed CTO was like… No! We spent all this money on hardware and it’s not doing what we need it to! That’s crap! We’re gonna move to the cloud and save money.
So they moved to Amazon.
Their first month’s bill was $50,000. And it only went up from there.
The cloud is dandy for small workloads. But when you've got a consistently large workload, the break-even point on owning your hardware is a lot lower than most people think, even factoring in infra management expenses.
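A back-of-the-envelope sketch of that break-even claim, using the numbers from this thread; the 3-year amortization and the $10k/mo management overhead are my own assumptions, not figures from the anecdote:

```python
# All figures are monthly USD unless noted. Sources: the anecdote above,
# except AMORTIZATION_MONTHS and INFRA_MGMT_PER_MONTH, which are assumed.
HARDWARE_COST = 60_000          # one-time spend, from the anecdote
AMORTIZATION_MONTHS = 36        # assumed 3-year useful life
COLO_PER_MONTH = 2_000          # cage rent, from the anecdote
INFRA_MGMT_PER_MONTH = 10_000   # assumed ops/management overhead
CLOUD_PER_MONTH = 50_000        # first month's AWS bill, from the anecdote

owned_per_month = (HARDWARE_COST / AMORTIZATION_MONTHS
                   + COLO_PER_MONTH + INFRA_MGMT_PER_MONTH)
savings_per_month = CLOUD_PER_MONTH - owned_per_month
months_to_recoup = HARDWARE_COST / savings_per_month

print(f"owned: ${owned_per_month:,.0f}/mo vs cloud: ${CLOUD_PER_MONTH:,}/mo")
print(f"hardware pays for itself in {months_to_recoup:.1f} months")
```

Under these assumptions the owned setup runs well under $15k/mo, so the hardware outlay is recovered within the first couple of months versus the $50k cloud bill; even doubling the management overhead doesn't change the conclusion much.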
> Their first month’s bill was $50,000. And it only went up from there.
Well, I've read it's not uncommon to have salaries of $200-500k USD/year per programmer; at that scale maybe it doesn't matter whether the infra spend is $50k or $100k. For others, such extra spending is _something_, though.
$50k × 12 = $600k/year, equivalent to 1-3 SWE employees; practically not even a single small team. It's nothing.
Individual contributors see these numbers and act like it's some huge amount of money but in the grander scheme of things when you're approving budgets for multiple teams of senior SWEs and other roles, it doesn't even register.
I’ve seen ICs lose promotions they were on course for and get wrecked on their bonuses for losing track of a single $50k/mo cluster in AWS and leaving it running, unused. At a FAANG company.
So, ICs who sweat that kind of waste are right to do so. If management sees that someone thinks 50k/mo is no big deal to squander, bet your ass there’s a good chance they’ll take at least a month’s worth out of that person’s annual bonus since that kind of money matters so little to them.
I’ll bet this isn’t an apples-to-apples comparison and the bill was for a lot more resources.
EC2 or ECS clusters per developer, experimental shiny cloud services left running, etc.