You are right that for most use cases microservices aren't necessary, but that doesn't mean they can't still be useful. Although there is some added cost up front, enforcing separation of concerns can be quite useful for security, and in later phases it keeps the monolith from growing too big and difficult to maintain or extend.
I would argue that, unless you are supporting many developers working on the same project simultaneously (as in, hundreds if not thousands), microservices will actually slow development without improving quality or robustness.
Many things are significantly easier in a monolith: integration testing, reasoning about (and verifying with tests) how components interact, refactoring interfaces, etc. As soon as you pull components out into microservices, many assumptions developers may not even realise they're making about developing in a monolith go out the window.
Every microservice you carve out of a monolith gives you at least two public APIs you didn't have to worry about before, and makes local development that much more complicated. I had a situation where I needed to spin up 20 microservices just to wire up an A/B test for marketing, and everyone kept asking me what was taking so long while refusing to listen to the trade-offs of their request. Good times.
I vote for punting on microservices until the value proposition is clear. Otherwise you just end up with a macrolith that makes you dream of the monolithic good old days.
Of course, this should have been the approach from the beginning, and it boggles my mind why people pick technologies or paradigms without considering their requirements and the tradeoffs of their technical decisions.
I think part of it is because of the hype machine, where people only talk about how awesome things are that they invented, instead of talking about what problems it solves, what it doesn’t solve, and what its tradeoffs are. If you are reading something to evaluate a technology and it doesn’t talk about all three of those things, discard what you’re reading, because it will mislead you.
I find reasoning about component interaction harder in a monolith. Monolith code has free rein to access other parts of the monolith; instead of having to understand a component's inputs and outputs, you have to keep the entirety of the monolith in mind when reasoning through any particular piece of code.
Smaller services are also easier to test, for the same reasons. Services force the team to limit scope. While one can try to do the same in a monolith, it's too easy to "just this once" rely on some back-channel data passing or an assumption about the internal state of another part of the monolith.
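To make that concrete, here's a minimal sketch (all names hypothetical, not from this thread) of what that back channel looks like in code: within one process, nothing but discipline stops a caller from bypassing a component's interface, whereas a network boundary makes the shortcut impossible rather than merely discouraged.

```go
// A sketch of the "back channel" problem: in a monolith, nothing
// stops unrelated code from reaching into another component's
// internal state instead of going through its public surface.
package main

import "fmt"

// The component's intended public surface: inputs and outputs only.
type OrderService interface {
	Total(orderID string) (int, error)
}

type orders struct {
	// Internal state that callers should never touch directly.
	totalsByID map[string]int
}

func (o *orders) Total(orderID string) (int, error) {
	t, ok := o.totalsByID[orderID]
	if !ok {
		return 0, fmt.Errorf("unknown order %q", orderID)
	}
	return t, nil
}

func main() {
	o := &orders{totalsByID: map[string]int{"42": 1999}}

	// The disciplined path: depend only on the interface.
	var svc OrderService = o
	total, _ := svc.Total("42")
	fmt.Println("via interface:", total)

	// The "just this once" back channel a monolith silently allows:
	// reading internal state directly. Put a network boundary between
	// these two components and this line cannot exist.
	fmt.Println("via back channel:", o.totalsByID["42"])
}
```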
We are replacing our monolith with microservices like we are Google: Kubernetes, Docker containers, AWS, gRPC, Node, React. It's gonna kill the company. Too many new shiny things at once.
I see debugging as a significant challenge in a microservices environment. Even with distributed tracing, how do you reason about the order of events across services? We've had trouble separating cause and effect in a monolith, let alone across microservices.
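For what it's worth, the usual partial answer is propagating a trace ID explicitly on every hop, since wall clocks on different hosts can't be trusted to order events on their own. A minimal sketch, with a hypothetical header name and made-up services:

```go
// A sketch of trace-context propagation: each service stamps its
// logs with a trace ID received from its caller, so a collector can
// group and causally order events that span multiple hosts.
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

const traceHeader = "X-Trace-Id" // assumed header name

func logEvent(service, traceID, msg string) {
	// In a real system this would go to a central collector keyed
	// by trace ID, not stdout.
	fmt.Printf("trace=%s service=%s %s\n", traceID, service, msg)
}

func main() {
	// Downstream "inventory" service: reads the propagated trace ID.
	inventory := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			traceID := r.Header.Get(traceHeader)
			logEvent("inventory", traceID, "reserved stock")
		}))
	defer inventory.Close()

	// Upstream "checkout" service: originates the trace and must
	// forward the ID on every outbound call, or the chain breaks.
	traceID := "trace-0001"
	logEvent("checkout", traceID, "order received")

	req, _ := http.NewRequest("GET", inventory.URL, nil)
	req.Header.Set(traceHeader, traceID)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("call failed:", err)
		return
	}
	resp.Body.Close()
}
```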
Some of the features here seem to be slightly at odds with security, like auto discovery. Not knowing a priori what's running where doesn't sound like clear security separation.
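One way to reconcile the two, at least on paper, is to layer an explicit caller-to-callee allowlist on top of discovery, so that finding a service doesn't imply permission to call it. A rough sketch, all names hypothetical:

```go
// A sketch of gated discovery: the registry still answers "where is
// service X running", but only for callers explicitly allowed to
// call X, restoring the a-priori knowledge of who talks to whom.
package main

import "fmt"

type registry struct {
	addrs map[string]string // service name -> address
	// Explicit caller->callee allowlist; absent in pure auto discovery.
	allowed map[string]map[string]bool
}

func (r *registry) lookup(caller, callee string) (string, error) {
	if !r.allowed[caller][callee] {
		return "", fmt.Errorf("%s is not allowed to call %s", caller, callee)
	}
	addr, ok := r.addrs[callee]
	if !ok {
		return "", fmt.Errorf("no instance of %s registered", callee)
	}
	return addr, nil
}

func main() {
	r := &registry{
		addrs: map[string]string{"payments": "10.0.0.7:443"},
		allowed: map[string]map[string]bool{
			"checkout": {"payments": true},
		},
	}

	// Allowed edge: checkout -> payments.
	if addr, err := r.lookup("checkout", "payments"); err == nil {
		fmt.Println("checkout may call payments at", addr)
	}

	// Denied edge: a newly discovered service gets no implicit access.
	if _, err := r.lookup("metrics-scraper", "payments"); err != nil {
		fmt.Println("denied:", err)
	}
}
```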
I see microservices as a return to the Unix philosophy:
>Write programs that do one thing and do it well.
>Write programs to work together.