Hacker News

Microservices will add latency because network calls are much slower than in-process calls.

Microservices, as an architectural choice, are most properly chosen to manage complexity - product and organizational - almost by brute force, since you really have to work to violate abstraction boundaries when you only have some kind of RPC to work with. To the degree that they can improve performance, it's by removing confounding factors; one service won't slow down another by competing for limited CPU or database bandwidth if they've got their own stack. If you're paying attention, you'll notice that this is going to cost more, not less, because you're allocating excess capacity to prevent noisy neighbour effects.

Breaking up a monolith into parts which can scale independently can be done in a way that doesn't require a microservice architecture. For example: use some kind of sharding for the data layer (I'm a fan of Vitess), plus two scaling groups - one for processing external API requests (your web server layer), and another for asynchronous background job processing (whether that's a job queue, workers pulling from a message queue, or both, depends on the type of app) - with dynamic allocation of compute when load increases. This is something where k8s autoscaling, possibly combined with cluster autoscaling, shines. This kind of split doesn't do much for product complexity, though, or for giving different teams the ability to release parts of the product on their own schedule, use heterogeneous technology, or choose their own tech stack for their corner of the big picture.
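The web-tier/worker-tier split described above can be sketched in a few lines. This is purely illustrative: a stdlib queue.Queue stands in for a real message broker, a thread pool stands in for the worker deployment, and scale_workers is a toy stand-in for whatever autoscaler (k8s HPA or otherwise) would resize that pool - all the names here are made up for the sketch.

```python
# Sketch of the two-scaling-group split: a web layer that only enqueues,
# and a worker layer sized from queue depth. Illustrative names throughout.
import queue
import threading

job_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def handle_api_request(payload):
    """Web-server layer: accept the request, enqueue work, return fast."""
    job_queue.put(payload)
    return {"status": "accepted"}

def worker_loop():
    """Background-job layer: pull jobs until the queue is drained."""
    while True:
        try:
            job = job_queue.get(timeout=0.1)
        except queue.Empty:
            return
        with results_lock:
            results.append(job * 2)  # placeholder for real processing
        job_queue.task_done()

def scale_workers(queue_depth, jobs_per_worker=10, max_workers=8):
    """Toy autoscaling rule: one worker per N queued jobs, with a cap."""
    return min(max_workers, max(1, queue_depth // jobs_per_worker + 1))

# The web tier accepts 25 requests; the "autoscaler" sizes the worker pool.
for i in range(25):
    handle_api_request(i)

n = scale_workers(job_queue.qsize())
threads = [threading.Thread(target=worker_loop) for _ in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(n, len(results))  # 3 workers drain all 25 jobs
```

The point of the shape, not the code: the two tiers share nothing but the queue, so each can be scaled (or fail) independently without being separate "services" in the microservice sense.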




Not to mention, you need an infra team to manage all this complexity - a much larger team than the one needed to maintain a few vertically scaled servers.

Salaries for 3 infra engineers run about $300k per year; the cost to the company is probably closer to $450k.

For $450k a year, you can get about 500 servers, each one with 128 GB RAM and 32 vCPUs.

Has anyone done this type of ROI analysis?
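The back-of-the-envelope math in the comment above works out like this (the figures are the ones quoted in the comment, not real quotes from any provider):

```python
# Comparing the quoted fully-loaded infra-team cost against the quoted
# server budget. All numbers come from the comment above.
infra_team_cost = 450_000              # 3 infra engineers, fully loaded, per year
server_count = 500

cost_per_server_year = infra_team_cost / server_count
cost_per_server_month = cost_per_server_year / 12

print(cost_per_server_year)    # 900.0 dollars/server/year
print(cost_per_server_month)   # 75.0 dollars/server/month
```

Whether $75/month actually buys a 128 GB / 32 vCPU box depends heavily on bare-metal vs. cloud pricing, and the two line items aren't fully interchangeable - someone still has to run the 500 servers.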


I'm not sure if we're on the same page here. When I said "cloud providers make this option cheaper than traditional servers" I meant it as in the pricing structure/plans of cloud providers. That's why I tried to contrive a scenario to make a better point. Meanwhile your definition of cost seems to center on performance and org overheads a team might incur.

You say that serverless will cost more "to prevent noisy neighbor effects"... but that isolation is an abstraction most cloud providers will already give you - something you already pay your provider for. So my devil's-advocate point now is: is it cheaper to pay them to handle this, or to run your own servers and manage it manually?


> You say that serverless will cost more "to prevent noisy neighbor effects"...but that is an abstraction most cloud providers will already give you

I actually wasn't talking about serverless at any point - I understand that term to mostly mean FaaS and don't map it to things like k8s without extra stuff on top, which is closer to where I'd position microservices - a service is a combo of data + compute, not a stateless serverless function. But I agree we're not quite talking about the same things. And unfortunately I don't care enough to figure out how to line it up. :)

My main point, I think, was that org factors rather than cloud compute costs are why you go microservice rather than monolith.



