The more people use the phrases 'paradigm shift', 'culture change' and 'I feel like', and the fewer people give actual hard-nosed technical justification for this new architectural mold, the more convinced I become that it is a transient fashion that needs to die already.
This comment is the same kind of thing as "I feel like it": it's essentially just "I don't like microservices." Your earlier comment did not provide any "hard-nosed technical justification" either; it was just "they're brittle", and the GP is arguing that if they are, they're not implemented correctly.
What is the hard-nosed technical justification for either microservices being brittle by default or for them not being a realistic way to build a system?
> technical justification for this new architectural mold
The technical justification is easier than the cultural or managerial justification: you can make more granular technical decisions by splitting and merging code across services and servers. This is very powerful.
Microservices aren't going away—they are the core of a successful, scalable distributed system. The hype surrounding the term is definitely loaded with cultural and managerial implications that are rarely addressed, however, and I hope that changes soon.
>you can make more granular technical decisions by splitting and merging code across services
"granular technical decisions" -- can you please give an example of a 'granular' technical decision?
>Microservices aren't going away—they are the core of a successful, scalable distributed system.
It would be more accurate to say that microservices is what you call a system that you made distributed even though you didn't actually have to make it distributed.
> can you please give an example of a 'granular' technical decision?
Whether or not to upgrade an underlying library for a service. How many nodes to give a cluster, and which features to prioritize on those nodes. You can do this with a monolithic codebase, but you move slowwww if upgrading one unit implies upgrading others. With microservices, you could even write every service in an entirely different language.
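For example (entirely hypothetical service names and library versions), two services can pin different versions of the same dependency and upgrade on their own schedules:

    # billing-service/requirements.txt -- already upgraded
    requests==2.31.0
    sqlalchemy==2.0.25

    # reporting-service/requirements.txt -- still on the old major version
    requests==2.31.0
    sqlalchemy==1.4.52

Each service builds and deploys independently, so the billing team's upgrade never blocks the reporting team.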
> It would be more accurate to say that microservices is what you call a system that you made distributed even though you didn't actually have to make it distributed.
You don't have to make anything distributed. I think it's more apt to draw an analogy to the Actor paradigm: microservices are discrete actors in a distributed system and can be analyzed independently from the other actors.
All the microservices movement has really brought to the table is "no, you don't need to distribute your work to benefit from the looser coupling". If you're seeing anything more, I suspect there would be significant disagreement in the movement about it.
>Whether or not to upgrade an underlying library for a service. How many nodes to give a cluster, and which features to prioritize on those nodes. You can do this with a monolithic codebase
Right. And it's no harder, either.
>you move slowwww if upgrading one unit implies upgrading others.
Absolutely not. You don't have to have microservices to have a set of loosely coupled libraries.
A reliable automated test suite is what is going to make library upgrades less painful for you, not 1,000 new, brittle serialization/deserialization layers.
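Something like this sketch (the `pricing` module and `parse_price` function are hypothetical) is what actually guards an upgrade, and it needs no network boundary:

    # test_pricing.py -- a behavioral test pinning what the library must do
    from decimal import Decimal

    from pricing import parse_price  # hypothetical in-process library

    def test_parse_price_behavior_is_stable():
        # If an underlying dependency upgrade changes behavior, this
        # fails in CI, not in production, and no RPC layer was needed.
        assert parse_price("$1,234.50") == Decimal("1234.50")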
>With microservices, you could even write every service in an entirely different language.
Really, really bad idea. Debugging a call stack is an order of magnitude easier than tracing a failure across a set of multiple services, which in turn is an order of magnitude easier than tracing a failure across a set of services written in different languages.
>You don't have to make anything distributed.
Some systems do have to be distributed for the ability to call on more resources than one machine can realistically provide, to deal with the inherent unreliability of individual servers, or (sometimes) for legal reasons. There are a great number of problems that you create for yourself if you do this, though, which is why avoiding it where possible and minimizing the pain of doing it is the architecturally sane thing to do.
"Microservices" is the 'paradigm' that you should create this headaches for yourself because Martin Fowler Told You To and because the idea that separation of concerns and loosely coupled libraries won't happen unless you do the technical equivalent of self-flagellation. The unfortunate reality is that it is just as easy to write tightly coupled code with microservices, it just creates a bigger headache for you when you do.
>All the microservices movement has really brought to the table is "no, you don't need to distribute your work to benefit from the looser coupling".
It seems to me that the microservices movement implies the direct opposite of this.
Okay then. Let's play the "hard-nosed technical justification" game for this new architectural mold, specifically pointed at your issues.
> The serialization/deserialization layer adds potential for bugs.
Adding an agreed-upon layer that aligns with clean architecture [0] will actually produce fewer bugs both upstream and downstream (this is why contract APIs are important). Such contracts are actually more valuable than monolithic systems where there is a sincere lack of data and state guarantees (clean understanding vs. black-box mentality).
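To illustrate (a minimal sketch; the service and field names are invented), an agreed-upon contract validates data at the boundary instead of passing untyped blobs downstream:

    # order_contract.py -- hypothetical contract for an orders service
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CreateOrderRequest:
        customer_id: int
        sku: str
        quantity: int

        def __post_init__(self):
            # Reject malformed payloads at the boundary, not deep downstream.
            if self.quantity < 1:
                raise ValueError("quantity must be >= 1")

    def parse_request(payload: dict) -> CreateOrderRequest:
        # Missing or mistyped fields fail loudly right here.
        return CreateOrderRequest(
            customer_id=int(payload["customer_id"]),
            sku=str(payload["sku"]),
            quantity=int(payload["quantity"]),
        )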
> The massively increased number of services all need constant monitoring.
It's the same amount of monitoring; it's just horizontally scaled, and it actually becomes easier to pinpoint issues. This also gives you the ability to add or change circuit breakers that fail over when issues targeting specific services are found (see slides 134-150 of Sam Newman's Principles of Microservices [1]).
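For reference, a stripped-down sketch of the circuit-breaker pattern those slides describe (names and thresholds are arbitrary, not taken from the slides):

    # circuit_breaker.py -- minimal illustration of the pattern
    import time

    class CircuitBreaker:
        def __init__(self, max_failures=5, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after  # seconds before a retry is allowed
            self.failures = 0
            self.opened_at = None

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: allow one trial call
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()  # trip the breaker
                raise
            self.failures = 0  # a success closes the circuit again
            return result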
> The massively multiplied error conditions (what happens when service 37 talks to 39 and 39 times out?) need to be accounted for and those edge cases must be ironed out.
They should have been "ironed out" well before this point to begin with. If the application fails due to infrastructure issues, that is actually an application design failure that should have been identified and resolved previously via TDD and failure-state testing. Graceful failure is always desirable, and having a legacy monolithic application does not prevent infrastructure/connectivity failures. (For reference, check out "The Twelve-Factor App" [2].)
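To make "graceful failure" concrete, here is a minimal sketch (the internal URL and function are made up, and the timeout is arbitrary) of degrading instead of dying when a dependency times out:

    import requests

    def fetch_recommendations(user_id: int) -> list:
        try:
            resp = requests.get(
                f"http://recs.internal/users/{user_id}",  # hypothetical service
                timeout=0.2,  # fail fast instead of hanging the whole request
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            return []  # degraded but alive: render the page without recs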
> You get fuck all except migraines caused by the brittleness of your app.
What it sounds like you're saying is:
"Building a monolithic app allows you to crutch bad behavior and poor design without having to understand the realities of software. You can ignore scalability, clean architecture, concurrency, data modeling and graceful failures."
I can understand how moving to microservices could seem very painful and produce a brittle result without a depth of understanding of these realities. This is specifically why I noted that it is a required paradigm shift for many developers.
For more inspired reading, you may want to check out Gilt's discussions on moving to microservices. [3]
"Building a monolithic app allows you to crutch bad behavior and poor design without having to understand the realities of software. You can ignore scalability, clean architecture, concurrency, data modeling and graceful failures."
I 100% agree with you. I have found that monoliths, no matter how militant the architect is, will eventually have bad behavior slip in. It is the path of least resistance: sooner or later a junior programmer will shortcut the interface.
In microservices there are simply fewer ways to cheat. The only access a service has to another service is through the defined interface, and the service providing that interface must now support it as-is going forward, which is the opposite of brittle. As you mentioned above, when dealing with another microservice the programmer must think about failure all the time, which is what they should have been doing anyway!
Thinking in microservices from the start is not that hard, but untangling a monolith can be very hard. Microservices came out of people realizing that huge monoliths become impossible to change because, over time, all the bad practices that crept in make them impossible to reason about.
There's value in enforcing separation of interface and implementation. But tooling can do that without needing the hair-shirt of putting a network connection between them.
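For instance (a sketch with made-up names), a structural interface plus a type checker gives you the same enforced separation in-process:

    # Interface/implementation separation without a network hop.
    # Hypothetical names; a type checker such as mypy enforces the contract.
    from typing import Protocol

    class PaymentGateway(Protocol):
        def charge(self, account_id: int, cents: int) -> bool: ...

    class StripeGateway:
        def charge(self, account_id: int, cents: int) -> bool:
            # Real implementation would live here.
            return True

    def checkout(gateway: PaymentGateway, account_id: int, cents: int) -> bool:
        # Callers depend only on the interface; swapping implementations
        # is a one-line change and involves no serialization layer.
        return gateway.charge(account_id, cents)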
>Adding an agreed-upon layer that aligns with clean architecture [0]
Creating new boilerplate never aligns with clean architecture. Clean architecture means solving your problem in the least amount of code possible, not the most.
>monolithic systems where there is a sincere lack of data and state guarantees.
If anything, you get fewer guarantees about your library if you expose it over a REST API instead of putting it directly on your call stack. Your library can now actually go down, and you have to prepare yourself for this eventuality and write a ton of code to deal with it.
>It's the same amount of monitoring.
25 services require 25 checks. 4 services require 4 checks, with all of the attendant graphs, notification systems, etc. There is simply no way to wriggle out of that one, no matter how great you think this architectural model is.
> This also gives you the ability to add or change circuit breakers that fail over when issues targeting specific services are found (see slides 134-150 of Sam Newman's Principles of Microservices [1])
This is a fix for a self-created problem. If I create a library to add two numbers together and create an API endpoint for it, I need a circuit breaker for it. If I instead just call it directly, I don't.
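Concretely (a deliberately silly sketch; the endpoint URL is made up):

    # In-process: nothing between caller and callee can "go down".
    def add(a: int, b: int) -> int:
        return a + b

    total = add(2, 3)

    # The same function behind a hypothetical HTTP endpoint: now the
    # caller needs a timeout, error handling, and eventually a breaker.
    import requests

    def add_remote(a: int, b: int) -> int:
        resp = requests.get("http://adder.internal/add",
                            params={"a": a, "b": b}, timeout=0.2)
        resp.raise_for_status()
        return int(resp.text)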
>They should have been "ironed out"
Of course they should be "ironed out" if you follow a microservices architecture! You have to do all of this self-created work or your app becomes massively unreliable. The point is that you can avoid these problems altogether if you KISS and don't follow a microservices architecture.
>What it sounds like you're saying is:
>"Building a monolithic app
Firstly, there is nothing intrinsically monolithic about running two libraries on the same computer, in the same service. Loose coupling has nothing to do with whether you are running two different bits of software in the same service. It simply means that the amount of work you have to do to swap out a component is low.
I currently work on a "microservices" system and it is tightly coupled as hell and a monolith.
I've built non-microservices apps that are extremely loosely coupled.
This intentional conflation of the word 'monolith' and "non-microservices" is disingenuous as hell.
>allows you to crutch bad behavior and poor design without having to understand the realities of software.
If "not doing microservices is a crutch", strong typing is also a crutch.
As it happens, I don't need the "crutch" of creating REST API endpoints between my libraries to make them loosely coupled. I do it anyhow. If I were to work with somebody who doesn't write loosely coupled code, though, I certainly wouldn't want to magnify the explosions caused by their bad code by spreading that tight coupling across a series of interconnected, networked services.
>You can ignore scalability, clean architecture, concurrency, data modeling and graceful failures."
I prefer to make scalability, clean architecture, concurrency, data modeling and graceful failure easier to handle, not harder. I can do that by using a (where possible) stateless, non-microservices architecture.