> IMHO those building greenfield solutions today should take a hard look at whether the default approach from the last ~10 years “of course you build in $BIGCLOUD” makes sense for the application - in many cases it does not.
When one buys a house, they should take a hard look at whether the default approach of paying for utilities makes sense, versus generating their own power.
While that's a bit snarky, the reasoning is similar. You can:
* Use "bigcloud"(TM) with the whole kit: VMs, their managed services, etc
* Use bigcloud, but just VMs or storage
* Rent VMs from a smaller provider
* Rent actual servers
* Buy your servers and ship to a colo
* Buy your servers and build a datacenter
Every level you drop, you need more work. And it grows (I suspect not linearly). Sure, if you have all the required experts (or you rent them) you can do everything yourself. If not, you'll have to defer to vendors. You will pay some premium for this, but it's either that or payroll.
What also needs to be factored in is how static your system is. If a single machine works for your use case, great.
One of the systems I manage has hundreds of millions of dollars in contracts on the line and thousands of VMs. I do not care if any single VM goes down; the system will kill it and provision a new one. A big cloud provider's availability zone often spans multiple datacenters too, each with its own redundancies. Even if an entire AZ goes down, we can survive on the other two (with possibly some temporary degradation for a few minutes). If the whole region goes down, we fail over to another. We certainly don't have the time to discuss individual servers or rack and stack anything.
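To make that "I don't care about a single VM" posture concrete, here's a minimal sketch, assuming boto3 with hypothetical subnet IDs and launch template name, of the kind of Auto Scaling group that does the killing and re-provisioning for you:

```python
# Minimal sketch: an Auto Scaling group spread across three AZs.
# Subnet IDs and the launch template name are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="worker-fleet",
    LaunchTemplate={"LaunchTemplateName": "worker-template", "Version": "$Latest"},
    MinSize=3,
    MaxSize=300,
    DesiredCapacity=30,
    # One subnet per AZ: lose an AZ and the group rebalances onto the other two.
    VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",
    # Fail the load balancer's health check and the instance is terminated
    # and replaced automatically; no human looks at individual VMs.
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)
```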
It does not come cheap. AWS specifically has egregious networking fees, and you end up paying multiple times (AZ-to-AZ traffic, NAT gateways, and myriad services that also charge by the GB, like GuardDuty). It adds up if you are not careful.
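As a rough illustration of how those per-GB charges stack: the traffic volumes below are invented, and the rates are assumptions based on AWS's published us-east-1 list prices, so verify against current pricing before trusting the numbers.

```python
# Back-of-the-envelope AWS networking costs for one month.
# Rates are assumptions based on published us-east-1 list prices.
CROSS_AZ_PER_GB = 0.01 * 2   # $0.01/GB charged in each direction
NAT_PER_GB      = 0.045      # NAT gateway data processing
NAT_PER_HOUR    = 0.045      # NAT gateway hourly charge

cross_az_gb = 50_000         # chatty services replicating across AZs
nat_gb      = 20_000         # traffic egressing through NAT
nat_gateways, hours = 3, 730 # one NAT gateway per AZ, hours in a month

monthly = (cross_az_gb * CROSS_AZ_PER_GB
           + nat_gb * NAT_PER_GB
           + nat_gateways * hours * NAT_PER_HOUR)
print(f"${monthly:,.2f}/month")  # ≈ $1,998.55 — before GuardDuty et al.
```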
From time to time, management comes up with the idea of migrating to 'on-prem', because that's reportedly cheaper. Sure, ignoring the hundreds of engineers who would be involved in the migration, and also ignoring all the engineers who would be required to maintain everything on-premises, it might be cheaper.
But that's also ignoring the main reason why cloud deployments tend to become so expensive: they are easy. Confronted with the choice between spinning up more machines and possibly missing a deadline, middle managers will ask for more resources. Maybe it's "just" 1k a month extra (those developers would cost more!). It gets approved. 50 other groups are doing the same. Now it's 50k. Rinse, repeat. If more emphasis were placed on optimization, most cloud deployments could be shrunk spectacularly. The microservices fad doesn't help (your architecture might require it, but often the reason is that you want to ship your org chart, not anything technical).
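A toy model of that ratchet, with entirely made-up numbers, shows how fast "just 1k" compounds:

```python
# Toy model of how "just 1k a month" approvals compound across an org.
# All numbers are made up for illustration.
teams = 50
monthly_increment = 1_000    # each team's "small" ask, in dollars
rounds_per_year = 4          # a new round of asks every quarter

spend = 0
for quarter in range(1, rounds_per_year + 1):
    spend += teams * monthly_increment
    print(f"Q{quarter}: ${spend:,}/month (${spend * 12:,}/year run rate)")
# Q4: $200,000/month ($2,400,000/year run rate)
```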
> When one buys a house, they should take a hard look at whether the default approach of paying for utilities makes sense, versus generating their own power.
Yes, people do. They install solar panels and use them to generate at least some of their own power. Near-future battery tech might allow them to generate all of it if they get enough sunlight, in which case this becomes a genuine question to answer: the cost of installing and maintaining the panels and batteries over their lifetime, versus the expected cost of purchasing power from utilities.
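That homeowner's question is just a lifetime-cost comparison; a sketch with entirely invented numbers:

```python
# The homeowner's question as arithmetic. All figures are assumptions.
install_cost   = 30_000   # panels + batteries, installed (assumption)
annual_upkeep  = 300      # maintenance per year (assumption)
lifetime_years = 25
utility_bill   = 2_400    # what the grid would cost per year (assumption)

self_gen = install_cost + annual_upkeep * lifetime_years
grid     = utility_bill * lifetime_years
print(f"self-generation: ${self_gen:,}, utility: ${grid:,}")
# self-generation: $37,500, utility: $60,000 — worth it *if* the assumptions hold
```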
In a similar manner, cloud vs. self-hosting is a valid consideration that changes over time. We now have Docker and similar tools, which make managing your own infrastructure much easier than it was ten years ago. I fully expect even better tools will come out in the future, so this consideration does change over time. Maybe in another ten years there'll be almost no benefit to using the cloud (except maybe as a CDN).
> In a similar manner, cloud vs. self-hosting is a valid consideration that changes over time. We now have Docker and similar tools, which make managing your own infrastructure much easier than it was ten years ago. I fully expect even better tools will come out in the future, so this consideration does change over time.
Excellent point. AWS is 21 years old. Docker (essentially the foundation of most self-hosting these days) is 10 years old. I think we're going to see many more self-hosted K8s control planes (as one example). And that's before considering even newer tools, built on these fundamental components, that make self-hosting easier still.