Having worked half my career at places with their own data centers and self-run infra, and the other half with mostly cloud-based solutions, I have a theory.
Perhaps we are designing far more complicated solutions now to leverage these cloud services, whereas having the constraints of a self-operated data center and infrastructure necessitates more ingenuity to achieve similar results.
We used to do so much more with just a few pieces of infrastructure, like our RDBMSes, as one example. It was amazing to me how many scenarios we solved with just a couple of solid, vertically scaled database servers with active-active failover, Redis, an on-prem load balancer, and some web servers (later, self-hosted containerization software). We used to design for as few infrastructure pieces as possible; now it seems like that is rarely a constraint people have in mind anymore.
Amen. I'm becoming the grumpy old engineer on my team for constantly asking why we need yet another <insert cloud technology here>. I'm not against new technology, but I am against not considering what we already have and how it may already solve the problem without widening our operational surface area. And it happens every single damn year now, because cloud providers string their own cloud primitives together to form new cloud services.
I've had this discussion so many times. Let's publish a notification, and let's have the message receiver call some API. Why not just call the API from the place where you want to publish the message? Because we need this SNS message queue.
Probably because the API can be unreachable, time out, etc. With a message queue, the message can be redelivered instead of customer data (or whatever) being permanently dropped with only a stack trace to remember it by.
That's a naive claim to make without any context. You have to know the source that triggers the code to publish the message, what the message is for, and the fault tolerance and availability of the API we're calling before you can even begin to decide. Which you validated perfectly by giving a snarky "what about redundancy" answer to a complicated question.
> Perhaps we are designing far more complicated solutions now to leverage these cloud services, whereas having the constraints of a self-operated data center and infrastructure necessitates more ingenuity to achieve similar results.
Nothing about "ingenuity"; it's simply that having some friction in the implementation makes for simpler designs.
If it costs you nothing (aside from per-request pricing, but that's not your problem right now; that's management's) to add a message queue between components, that's a great excuse to try that message queue or event-sourcing architecture you've read about.
And it works so "elegantly": just throw more stuff on the queue instead of having more localized communication.
We don't worry about scaling; the cloud worries about it (now the bill for that queue starts to ramp up, but that's just a fraction of a dev salary; we saved like two weeks of coding thanks to that! Except that fraction adds up every month...).
Repeat for the next 10 cloud APIs and you're paying at every move, even for stuff like "having a machine behind NAT". And if something doesn't work, you can't debug any of it.
Meanwhile, if adding a bunch of queue servers would take a few days for ops to sort out monitoring and backups, eh, we don't really need it; some pubsub on the Redis or PostgreSQL we already have can handle the stuff that needs it, and the rest can just stay in the DB. This and that can just talk directly since they don't really need to share anything else over a queue; we only used a queue to avoid fucking with security rules every time a service needed to talk to an additional service.
It's the classic "find a problem to fit our solution", a.k.a. the XY problem.
As an example, I have seen many people try to find a reason to use k8s because the industry says they should, instead of looking at what they need to do and then determining whether k8s is the best fit for that application.
Our reason was pretty much "clients want to use it". One client migrated to it for no good reason whatsoever, aside from a senior dev (who also owned part of the company) wanting to play with new toys. The other decided halfway through that their admins didn't really want to run a k8s cluster and just told us to deploy the resulting app (which REALLY didn't need k8s anyway) on Docker.
Maybe they're looking for an excuse to gain k8s experience to bolster their resumes? If most startups fail, might as well gain some skills out of the current one? Perhaps it doesn't benefit the startup though, inflating complexity and infra spend, and slowing productivity.