Also, never underestimate the power of a single bare-metal server. Today everyone seems to be in the clouds (pun intended), having accepted the performance of terrible, underprovisioned VMs as the new normal.
Stack Overflow -- the website that nearly every developer uses all the time -- is an example of a site running efficiently on a very small number of machines.
I'd rather have their architecture than hundreds of VMs.
For those who are discouraged by the massive complexity of Kubernetes/Terraform and the daunting system-design examples of big sites, remember you can scale to ridiculous levels (barring video or heavy-processing apps) with vertical scaling alone.
Before you need fancy Instagram-scale frameworks, you'll have other things to worry about, like testifying before Congress :-)
This is indeed the standard example I refer to when making my point, and all my personal projects follow this model whenever possible. The huge advantage, in addition to performance, is that the entire stack is simple enough to fit in your mind, unlike Kubernetes with its infinite number of moving parts and failure modes.
I share the general HN sentiment over microservices complexity but just to play devil's advocate...
I suspect that server cost in this case is asymptotic. If the (monetary) cost of SE's architecture is F(n) and that of your typical K8s cluster is G(n), where n is the number of users or requests per second, then F(n) < G(n) only for very large values of n. As in very large.
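To make the shape of that argument concrete, here's a toy cost model in Python. Every number in it is made up (the per-server rent, the 50k req/s per box, the per-request premium); it only illustrates why the curves might cross at a large n, not what real pricing looks like.

```python
# Toy cost model with entirely made-up numbers, purely to illustrate the
# shape of the argument; real pricing varies wildly by provider and workload.

def bare_metal_cost(n):
    """F(n): dedicated servers rented in coarse increments."""
    servers = max(1, -(-n // 50_000))  # ceil: assume one box per 50k req/s
    return 400 + servers * 250         # fixed ops overhead + per-server rent

def cloud_cost(n):
    """G(n): managed capacity that scales almost linearly with load."""
    return 50 + n // 100               # low entry price + per-request premium

# Find roughly where the curves cross, i.e. where bare metal becomes cheaper.
crossover = next(n for n in range(0, 10_000_000, 1_000)
                 if bare_metal_cost(n) < cloud_cost(n))
print(f"bare metal wins past ~{crossover} req/s under these made-up numbers")
```

With a different set of invented constants the crossover moves, but the qualitative point survives: the step-function fixed costs of F only beat the near-linear G once n is big.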
In essence, the devil's advocate point I'm making is that maybe development converges towards microservices because cloud providers make this option cheaper than traditional servers. We would gladly stay with our monoliths otherwise.
I tried to contrive a usage scenario to illustrate this, but you know the problem with hypotheticals. Without a concrete problem domain to theorize about, I can't even ballpark the compute requirements. I'd love to see someone else's analysis, if anyone can come up with one.
Microservices will also add latency, because network calls are orders of magnitude slower than in-process calls.
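A quick, unscientific way to see this is to time a plain function call against the same round trip over a loopback TCP socket. This is a stand-in, not a benchmark of any particular stack, and it still flatters the network case: real microservices sit across an actual network and add serialization on top.

```python
import socket
import socketserver
import threading
import time

def handler_fn(x):
    # The "in-process" version of the service: a plain function call.
    return x + 1

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # The "microservice" version: echo bytes back over TCP.
        self.request.sendall(self.request.recv(64))

# Start a local TCP echo server on an ephemeral port in a daemon thread.
server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

N = 200

t0 = time.perf_counter()
for i in range(N):
    handler_fn(i)
in_process = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    with socket.create_connection((host, port)) as s:
        s.sendall(b"ping")
        s.recv(64)
network = time.perf_counter() - t0

server.shutdown()
print(f"in-process: {in_process:.6f}s, loopback TCP: {network:.6f}s for {N} calls")
```

Even on loopback, with no serialization and a new connection being the only overhead, the gap is typically several orders of magnitude per call.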
Microservices, as an architectural choice, are most properly chosen to manage complexity - product and organizational - almost by brute force, since you really have to work to violate abstraction boundaries when you only have some kind of RPC to work with. To the degree that they can improve performance, it's by removing confounding factors; one service won't slow down another by competing for limited CPU or database bandwidth if they've got their own stack. If you're paying attention, you'll notice that this is going to cost more, not less, because you're allocating excess capacity to prevent noisy neighbour effects.
Breaking up a monolith into parts that can scale independently doesn't require a microservice architecture. For example: use some kind of sharding for the data layer (I'm a fan of Vitess), plus two scaling groups, one for serving external API requests (your web server layer) and another for asynchronous background job processing (a job queue, workers pulling from a message queue, or both, depending on the type of app), with dynamic allocation of compute as load increases - this is where k8s autoscaling, possibly combined with cluster autoscaling, shines. This kind of split doesn't do much for product complexity, though, or for giving different teams the ability to release parts of the product on their own schedule, use heterogeneous technology, or choose their own tech stack for their corner of the big picture.
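The data-layer half of that split can be sketched in a few lines. This is a generic hash-based router with hypothetical shard names, just to show the idea; it is not how Vitess itself works internally (Vitess adds keyspace management, query routing, resharding, and much more on top of the same basic notion).

```python
import hashlib

# Hypothetical shard names for illustration only.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(user_id: str) -> str:
    """Deterministically map a row key to one of the shards.

    A stable hash (not Python's randomized hash()) keeps the mapping
    consistent across processes and restarts.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

for key in ("alice", "bob", "carol"):
    print(key, "->", shard_for(key))
```

The catch, and the reason tools like Vitess exist, is everything this sketch ignores: adding a shard changes the modulus and remaps most keys, so real systems need consistent hashing or managed resharding.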
I'm not sure if we're on the same page here. When I said "cloud providers make this option cheaper than traditional servers" I meant it as in the pricing structure/plans of cloud providers. That's why I tried to contrive a scenario to make a better point. Meanwhile your definition of cost seems to center on performance and org overheads a team might incur.
You say that serverless will cost more "to prevent noisy neighbor effects"...but that is an abstraction most cloud providers will already give you. Something you already pay your provider for. So my DA point now is: is it cheaper to pay them to handle this, or to run your own servers and manage it manually?
> You say that serverless will cost more "to prevent noisy neighbor effects"...but that is an abstraction most cloud providers will already give you
I actually wasn't talking about serverless at any point - I understand that term to mostly mean FaaS and don't map it to things like k8s without extra stuff on top, which is closer to where I'd position microservices - a service is a combo of data + compute, not a stateless serverless function. But I agree we're not quite talking about the same things. And unfortunately I don't care enough to figure out how to line it up. :)
My main point, I think, was that org factors, rather than cloud compute costs, are why you go microservices rather than monolith.
I can't recall reading much on how going for 'the cloud' or 'serverless' saved anyone money. On the other hand, I've read my fair share of horror stories about how costs ballooned and going for the old-fashioned server/VPS ended up being much, much cheaper.
The main argument in favor of the 'cloud' is that it's easier to manage (and even that is often questioned).
I haven't looked for a while, but Plenty of Fish (POF) also ran on the same kind of infrastructure and the same framework, ASP.NET. Maybe ASP.NET is particularly suited to this approach?
What about interpreted languages? I was taught a Python web server can handle $NUMCPUS+1 concurrent requests, and that 32 one-CPU VMs will therefore perform as well as one 32-CPU VM.
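For what it's worth, the sizing rule I've usually seen (it's the starting point Gunicorn's docs suggest) is (2 x cores) + 1 worker processes rather than cores + 1, the idea being that workers blocked on I/O leave cores free for the others. A trivial sketch:

```python
import multiprocessing

cores = multiprocessing.cpu_count()

# Gunicorn's docs suggest (2 x cores) + 1 worker processes as a starting
# point. Each CPython worker is effectively limited to about one core of
# CPU-bound work by the GIL, which is why you scale with processes at all.
workers = 2 * cores + 1
print(f"{cores} cores -> {workers} workers")
```

Either way, the multiplier is a heuristic to tune under load, and 32 one-CPU VMs vs one 32-CPU box aren't exactly equivalent in practice (per-VM memory overhead, connection counts, cache locality all differ).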