I was curious, so I had a bash at comparing the cost of just buying another server to throw at the problem vs. telling a FAANG dev to optimise the code.
A dedicated 40-core / 6 TB server is around $2k, but that will be amortised over the years of its life.
It also needs power, cooling, someone to install it in a rack, someone to recycle it afterwards, and so on: call it around $175/yr.
FAANG dev compensation varies wildly, but $400k seems fair-ish (given how many have TC > $750k).
So that's about 12 hours of dev time optimising the code vs. throwing another 40-core / 6 TB machine at the problem for 365 days.
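For anyone who wants to check the arithmetic, here's a quick sketch; the ~2,000 working hours/yr and counting the full purchase price up front (rather than amortising it) are my assumptions:

```python
# Back-of-the-envelope check of the "about 12 hours" figure.
# Assumptions (mine): ~2,000 working hours/yr, and the full purchase
# price counted up front rather than amortised over the server's life.
server_purchase = 2_000        # USD, the 40-core / 6 TB box
running_cost_per_year = 175    # USD/yr: power, cooling, racking, recycling

dev_total_comp = 400_000                 # USD/yr
dev_hourly = dev_total_comp / 2_000      # ~$200/hr

first_year_server_cost = server_purchase + running_cost_per_year  # $2,175
break_even_hours = first_year_server_cost / dev_hourly
print(f"~{break_even_hours:.0f} dev hours buys a server for a year")  # ~11
```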
The big cost I'm leaving out on both sides is the building they sit in. What's the recharge for a desk at a FAANG, $150k/yr? I have no idea what a rack slot works out at.
Unless I've screwed up the figures somewhere, we should probably all be looking at replacing Python with Ruby if it squeezes out more developer productivity!
Adding hardware doesn't improve single-request performance, so slow stacks can require a bunch of optimizing or caching work that wouldn't be needed on a faster one. At some point it also impacts productivity when the test suite is slow, the app takes a long time to restart, etc.
Sure it does: my home-lab servers have single-thread performance roughly half that of a current server.
What’s the ping latency from US East to Europe? 80ms-ish? What’s a roundtrip to Postgres with a regular business-app-type query, 20ms-ish? What’s the latency of a beefy Rails app’s request handling, 40ms?
We’re talking 140ms best case for a slow stack. What can you get that down to with tuning work?
When your user comes along on their 4G connection with 800ms latency, will they be able to tell the difference?
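Putting those ballpark numbers together (the 5ms tuned figure is purely my assumption for illustration):

```python
# Rough request-latency budget using the ballpark figures above.
transatlantic_rtt = 80   # ms, US East <-> Europe
db_roundtrip = 20        # ms, one Postgres roundtrip for a typical query
slow_app = 40            # ms, the beefy Rails app's request handling
tuned_app = 5            # ms, assume heroic tuning gets it down this far

slow_stack = transatlantic_rtt + db_roundtrip + slow_app    # 140 ms best case
tuned_stack = transatlantic_rtt + db_roundtrip + tuned_app  # 105 ms

last_mile = 800          # ms, the user's bad 4G connection
print(slow_stack + last_mile, "vs", tuned_stack + last_mile)  # 940 vs 905 ms
```

End to end, that's a difference of under 4%, which is exactly why the last-mile latency swallows the tuning work.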
Don’t get me wrong, I’d far rather invest time in making the stack efficient, but from a business point of view it might not make sense vs. just throwing hardware at the problem and spending your expensive engineering resources on delivering more utility to customers.