Microservices are not a solution, they're a capability. It's powerful for a team to be able to deploy a tiny service with the absolute minimum amount of explicit plumbing to meet operational requirements. Whether they should break their system up this way is a case by case judgment.
Every place I've been, the costs of microservices get overlooked in favor of the illusion of decoupling. Microservices will absolutely make simple features fully contained within a service easier, but as soon as a feature spans services or even impacts the contract of a service, you're in for more pain than in a monolithic architecture. Microservices seize local simplicity at the cost of integration complexity.
Microservices, as a philosophy, is encoding your org design at the networking layer. I hope you get the factoring right the first time, because it's going to be painful to change.
I'm all for small services where they make sense. But just like the advice that "all functions should be small", microservices have been myopically adopted as a design principle rather than a debatable decision.
At a previous company, I had a team that chose to rebuild a feature from the monolith as a microservice to "contain" the complexity. They built the service off to the side, had automated tests validating its inputs and outputs, and they were quite proud of it. Then came integration, and it became clear they had not put any thought into how to migrate the data, what would happen if the service was unavailable when the monolith called it, how to generate reports across the monolith and microservice, etc etc.
In this case, using microservices was like getting drunk: a way to briefly push all your problems out of your mind and just focus on what's in front of you. But your problems didn't really go away, and in fact you just made them worse.
Indeed. I did a simple calculation for myself. We split a service into 5 microservices. Now if I assume, say, a 5% chance of integration-related issues between any 2 services, issues which obviously would not exist in a monolith, we are actually looking at a >20% chance of error in an end-to-end transaction.
To me this is a pretty high chance of error on a day-to-day basis. And in our case some errors are convoluted, so they possibly get missed and cause slow data corruption.
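For what it's worth, a quick back-of-the-envelope version of that calculation, assuming 5 independent integration points that each fail 5% of the time (round numbers, obviously):

```python
# End-to-end failure probability if every hop must succeed and each hop
# independently has a 5% chance of an integration-related failure.
p_hop_failure = 0.05
hops = 5

p_end_to_end_failure = 1 - (1 - p_hop_failure) ** hops
print(f"{p_end_to_end_failure:.1%}")  # ~22.6%, i.e. more than 1 in 5 transactions
```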
I know microservices have passed the excitement phase of the hype cycle, but they're not useless. One of the most enjoyable systems I've worked on was a microservice architecture.
That said, I'm going to make some wild inferences about what you were getting at in order to say that I agree that a microservice architecture is probably a solution in search of a problem in most cases. And, even in the cases where it is a good option, I can see all sorts of ways to mess up the implementation. The article is right; the trickiest things to get right about microservices are actually organizational issues, not technical ones. My hot take is that dev teams who are considering adopting microservices should take a serious look at how much ability they have to influence the org chart and inter-team and inter-departmental communication. If management is strictly something that happens to them, I would not give them stellar odds of achieving sustainable success with microservices. Perhaps some other form of SOA, but not actual microservices.
I wouldn't underrate the technical issues. To really make it work, you've gotta get the tooling right, so you're not writing and maintaining plumbing (logging, tracing, runtime config, networking, deployment, API layer, repository, etc etc) for every single microservice, all in their little bespoke ways. I once decommissioned a microservice after noting that the LoC of all its plumbing and config outweighed the actual logic 5:1. Pushing that plumbing to the background, so it can be managed as a cross-cutting concern, isn't so easy.
> Hearing some places just have one giant database where all teams can add and remove columns is... Kind of scary to me.
Are you talking about Bank of America? My impression from the interview was that it is not a free for all but rather all teams are equal but some teams are more "equal" than others (compliance). I still don't get why they need write/execute access. I would be OK with other teams having read access to my database.
There was a discussion the other day about using views as an external interface to a "service": basically we maintain REST, gRPC, or whatever, but if other teams just need read-only access to our data, we can expose a view to them.
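A minimal sketch of that idea, using sqlite3 purely for illustration (the table, columns, and view name are all made up):

```python
# The view is the published, read-only contract; the underlying table stays
# private to the owning team and can be refactored as long as the view keeps
# its shape.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id             INTEGER PRIMARY KEY,
        customer_id    INTEGER,
        total_cents    INTEGER,
        internal_notes TEXT      -- never exposed outside the owning team
    );

    CREATE VIEW orders_public AS
        SELECT id, customer_id, total_cents FROM orders;
""")

conn.execute("INSERT INTO orders VALUES (1, 42, 1999, 'do not expose')")
print(conn.execute("SELECT * FROM orders_public").fetchall())  # [(1, 42, 1999)]
```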
Assuming people will need reports that cut across team/microservice boundaries, all that data has to end up in one common place, possibly a data warehouse. At which point, other teams can still add and remove columns and break your reports.
Well, a data warehouse is one thing. I have heard of them just having a single live database that everyone hits. For all uses.
Edit: in contrast, I don't want anything about a user except their id in a table for most features. Hearing of free-for-alls on data makes me question how they protect payment or just general private data.
> Microservices, as a philosophy, is encoding your org design at the networking layer.
Not always. One of the things where I desperately wish people would adopt the "microservices philosophy" is in applications which provide a scripting language.
For example, if I want to "script" OpenOffice, I am stuck with the exact incarnation of Python shipped with OpenOffice. Nothing newer; nothing older; exactly binary compatible. This is a really irritating limitation.
If, however, they simply provided a "microservice interface" that anyone could talk to rather than just the anointed Python, then you could run your own Python or script using a completely different language.
I'm picking on OpenOffice here, but this is not specific to them. Nobody who has a "scripting extension language" as part of their application has demonstrated anything better.
That's how a lot of Windows stuff and MS Office stuff is built - as (D)COM components you can arbitrarily connect to from anything that can speak COM.
It's also how Windows Scripting Host operates (it also makes the language interpreters into COM objects, so you can extend the list of available languages yourself)
One should probably not even think of microservices if one employs fewer than 100 programmers. In many cases microservices are introduced because of fashion and/or resume driven design not because it actually makes sense. I prefer refactoring a poorly structured monolith to refactoring poorly structured microservices in every case.
The whole problem starts with calling them microservices. Just call it a service oriented architecture please. The whole micro thing is already an indication of the kind of trouble you are getting yourself into. More respect to the monolith please.
Well, a service-oriented architecture is a more general category than microservices and has been around much longer. In many cases it is much more justifiable on technical grounds. Just to give one example. You have an API that is not the most stable thing in the world and when it misbehaves you do not want it to take the whole application down with it. So you put it in a separate service that can easily be restarted. That is a good technical reason for a separation of executables. Problems arise when people introduce these separations for no good reason. Then one gets all of the problems of RPC and none of the benefits. I.e., one is making things more complicated for no good reason.
This is key. Recently I've become a bit obsessed with naming, and not just because naming needs to be consistent and coherent - how you name something in software dictates how you will think about it, what concepts you will relate to it, etc. The name of one component can affect your entire architecture. I'm not sure how to validate it, but I believe that naming choices are almost as important as application architecture because of their effects on your thought process.
I couldn't tell if you were serious or joking, or a mix of both. I think in part because I had the same reaction when I heard about microservices. I thought:
"Uh... we had that back in the early 2000s at least, they called it SOA back then."
And yes we learned back then how painful and complex that kind of architecture was to reason about and support, compared to a simple monolith.
Monolith is truly king unless obviously you're at FAANG scales. 99%+ of shops are many orders of magnitude below that scale.
My favourite phrase here is "distributed systems", and that includes any system with discrete components that communicate over a network internally (i.e. consuming a third-party API does not count; obviously there would always be some gray areas).
And yes, a simple SPA is a distributed system, even if the backend is a monolith.
Another phrase I like to use is 'vertically integrated service'. You need to be vertically integrated to scale horizontally. That is the whole point of microservices after all.
A coworker at a previous four person startup was always advocating microservices.
Having to push back on that over and over was frustrating.
We didn’t even have a real devops guy, or a vpc with properly partitioned CIDR blocks to segregate our databases from the public web and we’re going to start adding the complexity of a microservice architecture?
For what! We didn’t even have _users_ yet.
But trying to get folks to dogfood our application, since we had no actual users besides the founder, was like pulling teeth.
To me microservices should not be an architecture.
You can have pieces of functionality in a monolith that make sense to scale independently, and those should not be micro, they should be meaningful pieces of functionality that justify the overhead of spinning them out.
In a way your comment reflects this, a lot of the places that justified microservices were at a scale where their "microservice" was serving more requests than the average company's entire codebase.
It's "big data" with 10 GBs of logs all over again.
Things like message queues and asynchronous events are available for your favorite programming language for in-executable use. You can then do things like running them on a thread pool that has the same number of threads as your machine has cores.
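A minimal stdlib sketch of that in-process setup (the event names and worker logic are placeholders):

```python
# An in-process work queue consumed by a thread pool sized to the core count.
import os
import queue
from concurrent.futures import ThreadPoolExecutor

work = queue.Queue()

def worker():
    while True:
        item = work.get()
        if item is None:          # sentinel: this worker is done
            work.task_done()
            return
        print(f"processing {item}")
        work.task_done()

n_workers = os.cpu_count() or 4
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    for _ in range(n_workers):
        pool.submit(worker)
    for event in ("signup", "invoice.paid", "email.bounced"):
        work.put(event)
    work.join()                   # wait for the real work to drain
    for _ in range(n_workers):    # then tell every worker to exit
        work.put(None)
```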
Not necessarily. That one can have these tools in-application doesn't mean one cannot combine this with some form of RPC as well. Especially whether one needs load balancing or not seems to be a completely orthogonal issue.
On the other hand, why not a monolith and a single instance? If one uses a performant language one would be amazed how much can run on a single machine. If one expects growth one should have some plan to scale, sure, but if one does not go for shenanigans like running queues and the like between applications using some form of RPC, one might be amazed how much a single monolith can actually do. All this extra network stuff and so on is not exactly free.
Without single instance, an in process queue doesn't provide the same kind of functionality that a shared queue does and would probably introduce split brain kind of problems, no?
Right, you’re taking it as a suggestion, as in a positive nudge to single instance monolithism. However, I read it as merely making GP aware of other options that are on the table.
Splitting a system into microservices can help individual teams be better stewards of their part of the system. They can release on their own schedule, they can use their own linting rules, heck even a different language, and they can have better control of incoming code changes. With a monolith any random developer can go and flip some private method to public, import it way across the modules, and presto you are now building a ball of mud. Need an out of cycle release? Hopefully you have CI/CD or now you have to beg the SRE in charge to do it for you.
> Splitting a system into microservices can help individual teams be better stewards of their part of the system.
Team being the keyword here. If you have 3-5 developers per microservice, you're absolutely okay. If you have 3-5 microservices per developer, that's when it gets ugly.
> With a monolith any random developer can go and flip some private method to public, import it way across the modules, and presto you are now building a ball of mud.
If "any developer" can do it, then the problem is not with monoliths, but rather with your pull request reviews process and lack of code ownership. This can even be helped by Github and other platforms with a CODEOWNERS file.
Microservices by themselves don't solve this problem. If anyone has commit access to all microservices, then they can make an even bigger mess, the same way.
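For context, a CODEOWNERS file like the one mentioned above really is tiny; a hypothetical example (the paths and team handles are made up):

```
# CODEOWNERS (at the repo root or in .github/)
# Changes under a path require approval from the owning team.
/billing/   @acme/billing-team
/auth/      @acme/identity-team
/shared/    @acme/platform-team
```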
I haven't had experience with well isolated modules in a monolith so I'm likely not giving monolith enough credit here. It's possible a microservice organization may still have an advantage by having more of this isolation by default, instead of an uphill battle to get things set up properly.
If you are already doing code reviews, it is trivial to setup.
If you’re using microservices + monorepo, you need exactly the same thing, otherwise people will just commit in your service.
If you’re relying on multiple repositories, you can just split things within libraries and have the exact same effect. You can even have separate deployments with this arrangement.
Not to mention it is also an “uphill battle to get things set up properly” with microservices, so there’s really no advantage in using them for enforcing encapsulation.
But most important: a team unable to enforce encapsulation and code ownership is not ready for microservices at all.
Yes, that is the marketing brochure pitch for microservices.
In the real world, poorly designed microservices make the ball of mud problem much, much worse, and whatever pain you had in deployments in a monolith are now magnified ten-fold. I have not been fortunate enough to see well-designed microservices, so I suspect the ball of mud is the default. Can this be rectified through discipline? Probably, but I haven't seen it.
Asynchronous? Doesn't save you, when an upstream service changes event definitions and emits events with unexpected structure, and a downstream service starts failing. Can this be rectified through Async OpenAPI and rigorous contract testing? Probably, but I haven't seen this happen in a way that helps.
I have seen large companies survive perfectly well on a monolithic ball of mud, and small companies get lost in a mud pit of microservices.
My point is not that microservices are bad, they're not, they're just a tool, but they are a tool that is a poor fit for most companies, in my opinion. I can't speak to the few giant companies that need them and use them effectively; there is a good reason the tool exists.
> In the real world, poorly designed microservices make the ball of mud problem much, much worse
I can see that. I am trying to reflect on why I have such negative experience with a monolith and positive with microservices. It's possible that in the monolith setups I had, poor design was easy by default. No linting, no cross-module ownership interlocks, very slow and costly production deploys. Likewise the microservice setups tended to make poor design harder since code is isolated by default both at compile and runtime.
Can you make a monolith with all the good benefits of modularity but without the complications of network RPC etc.? Maybe, but I haven't seen it yet. (Excluding trivial single-team apps; talking about at least 3 teams and 50+ headcount orgs).
> Asynchronous? Doesn't save you, when an upstream service changes event definitions and emits events with unexpected structure, and a downstream service starts failing
Sure, shit happens. Again my experience may be colored, but when costly mistakes happen in monoliths, it tended to take longer to roll back because of how deploys are structured both technically and on an organizational level. I think I would still prefer smaller units in this case.
> There's nothing about writing software in different languages that necessitates separating functionality with HTTP calls
I mean it's only computers and the only limit to what we can make them do is our imagination.
In this case though for the sake of argument what options would I have if I, say, needed to let a remote team add some functionality to my, say, Spring backend but they really prefer to write C# and have their own CI/CD system. I'm not sure how I would accomplish this in a monolith.
> I'm not sure how I would accomplish this in a monolith.
With a library.
Depending on what you’re doing (process ran a few times a day?), maybe even spawning a process is enough.
> what options would I have if I, say, needed to let a remote team add some functionality to my, say, Spring backend but they really prefer to write C# and have their own CI/CD system.
That sounds like you need at most two different services, not microservices.
I don't know what that library is right now. Meanwhile I could set up the separate repos/microservices by the end of the day.
> That sounds like you need at most two different services, not microservices
It is unfortunate that the "micro" in "microservice" is often misunderstood to expect that these are very small and granular components. In practice by and large it ends up being separate services with separate repo, code review, ownership, deploy or CI/CD etc. pipelines. It doesn't have to mean they are actually very small. I know people like to joke about npm components and the leftpad thing but in my experience microservices have not turned out like that.
Are you serious? In which language do you program that you don't have to use multiple libraries daily? You just talked about NPM.
In your example, the C# could output DLLs that have functions that can be called by the Java code. This can give the same encapsulation you get with your multiple services example, including separate deployments, different repo, different language, etc.
You could also use different processes. Please tell me you know what an executable is...
> Meanwhile I could set up the separate repos/microservices by the end of the day
That's because someone set it up for you, not because it's faster.
In your hypothetical scenario of "I needed to let a remote team add some functionality to my, say, Spring backend but they really prefer to write C# and have their own CI/CD system" you didn't mention that "…but we already have a fast way of setting up micro services".
> It is unfortunate that the "micro" in "microservice" is often misunderstood*
It's not, you're using the term loosely. "Two services" do not make a micro service architecture.
Sorry to be blunt, but I'm getting the feeling you're not talking out of experience, but rather repeating popular talking points.
> Are you serious? In which language do you program that you don't have to use multiple libraries daily?
I said I don't know which library would let me make C# method calls from a Java Spring runtime, presumably in the same process. Do you? Feel free to share a link.
> That's because someone set it up for you, not because it's faster.
Yes, the whole point was to come up with a concrete example for the sake of argument, and see how we can accomplish it in practice. We're not writing academic papers here, the fact that this functionality is widely available is a good thing.
> It's not, you're using the term loosely
Yes, me and 95% of the industry. Again, it's unfortunate but here we are.
> I said I don't know which library would let me make C# method calls from a Java Spring runtime, presumably in the same process. Do you? Feel free to share a link.
You don't need a library for that, it is part of the language. Java can call code from native libraries, C# can export native functions. It can also be another process. [1]
> Yes, the whole point was to come up with a concrete example for the sake of argument, and see how we can accomplish it in practice. We're not writing academic papers here, the fact that this functionality is widely available is a good thing.
The problem is that you moved the goalposts. "I need to interoperate with C# code" is very different from "I need to interoperate with C# code and my company uses microservices and that's the only thing I know".
You can't claim something is automatically worse just because you're ignorant about it.
> Yes, me and 95% of the industry. Again, it's unfortunate but here we are.
Nope. There are exactly zero companies claiming to use microservices when they have exactly two services. And it's not about size, by the way, it's about different architectural patterns.
> You don't need a library for that, it is part of the language. Java can call code from native libraries, C# can export native functions. It can also be another process.
Having never done this, I believe you. I am still skeptical of how quickly/effectively you could set up a CI/CD system and the module repos to deploy to a monolith. I wonder how the type interop works there (equivalent of OpenAPI-based code generation).
> The problem is that you moved the goalposts. "I need to interoperate with C# code" is very different from "I need to interoperate with C# code and my company uses microservices and that's the only thing I know".
That's maybe a bit unfair. There are a lot of microservice shops, but I haven't heard many who do this Franken-monolith thing. Remember that infamous microservice dependency graph from Uber? I guess Facebook was famous for doing a monolith but I don't know if it was purely PHP or not.
> You can't claim something is automatically worse just because you're ignorant about it.
I can if the metric is availability/familiarity in the industry, but OK.
> Nope. There are exactly zero companies claiming to use microservices when they have exact two services. And it's not about size, by the way, it's about different architectural patterns.
I don't know, my experience is people like to say they use microservices but they just have a service-oriented architecture. Meanwhile I never ran across an organization that does something like npm modules and goes way crazy with granular services. I don't know, maybe they are out there.
> That's maybe a bit unfair. There are a lot of microservice shops, but I haven't heard many who do this Franken-monolith thing. Remember that infamous microservice dependency graph from Uber? I guess Facebook was famous for doing a monolith but I don't know if it was purely PHP or not.
Your original example was that you "needed to let a remote team add some functionality to my, say, Spring backend but they really prefer to write C# and have their own CI/CD system". I am telling you the classical way of solving this problem. You don't get to change the constraints ("I already know how to setup X") after an answer just to say the answer is wrong/worse and expect not to be called out on it...
Also: It's not a Franken-monolith, dude... it's just a library. Have you ever used ImageMagick to resize images from your microservices? Maybe made a request using libcurl? Maybe had to use a SQL driver that was native, maybe Redis? A NPM package with native code? That's exactly the same thing... I literally don't know of a single language that doesn't use those things...
Plus, there is a HUGE difference between what you asked and a Uber/Facebook scenario. You seem to be under the impression that what I'm suggesting is a replacement for real microservices. No, it is simply a solution for the problem you described.
You described a problem and said you don't know how to solve other than with X. I'm giving you Y. I don't understand what's the issue here.
> I can if the metric is availability/familiarity in the industry, but OK.
Sure, but you can't use yourself as a benchmark for the industry if you're that unfamiliar with something that basic.
> I don't know, my experience is people like to say they use microservices but they just have a service-oriented architecture. Meanwhile I never ran across an organization that does something like npm modules and goes way crazy with granular services. I don't know, maybe they are out there.
Again, there is a humongous difference between "we're doing microservices" or even "we have a service oriented architecture" and your "I need two things to interoperate".
> You don't get to change the constraints ("I already know how to setup X") after an answer just to say the answer is wrong/worse and expect not to be called out on it...
I don't see it as being called out, it's a constructive discussion. The purpose of my example is to see how practical it would be for me to set up a monolith-based CI/CD that could support multi-lingual, multi-repo teams. I have a better idea now, maybe have them publish to a local Maven style server, and kick off a monolith build when there is a new library version? The monolith can then bind them together and rebuild. I'm not sure how the host interfaces would get regenerated still.
Good examples re: libcurl, ImageMagick, etc. How would we deal with very frequent changes there? I know that upgrading a SQL driver is not a task we take lightly, and I would love to see how we could make this easy to do on a daily or hourly basis. Mainly we'd need auto-generated strong types somewhere on the interface, again, like OpenAPI but for libraries.
> You seem to be under the impression that what I'm suggesting is a replacement for real microservices.
This discussion started with several suggestions that monoliths could do just as well as most microservice setups, and I wanted to do an exercise to see how practical this would be.
> I don't understand what's the issue here.
There is no issue. Who said there is an issue? So far so good.
> Sure, but you can't use yourself as a benchmark for the industry if you're that unfamiliar with something that basic.
I know and/or interview enough people across the web-dev and game-dev industry that I feel I have a sizable sample of data points to form an opinion that is not purely a guess of a single person.
> Again, there is a humongous difference between "we're doing microservices" or even "we have a service oriented architecture" and your "I need two things to interoperate".
Sure. The two things example was just a distilled example for the purpose of a thought experiment. We can easily extrapolate to an org with tens of services where maybe most people like to use the default environment like Java/Spring but a few teams maybe prefer or their use case calls for something different like Python or C# or whatever. If anything it would be even more difficult to stand up a monolith there with more and more varied components.
Updating SQL drivers, etc, is not trivial because you don't have control over the code, but if you have enough integration tests (or at least proper failure handling) for your separate library then you can update with the same frequency as microservices.
I think it's fine to do this thought exercise, but you're massively discounting the difficulties of microservices, while making up obstacles and mocking ("Franken-monolith") every other suggestion. Any company that doesn't already use microservices (and doesn't also have people with experience building infrastructure for them, rather than just using it) is gonna have a harder time with a microservice solution than you currently do. It is perfectly fine to extrapolate from your own experience, but you're discounting things without any research whatsoever and saying "nobody knows about it" or "nobody uses it" in a very disrespectful way. Another post in this discussion about code ownership is exactly the same: it takes literally seconds to write a simple codeowners file in GitHub, while you make it seem like it would be a Sisyphean task ("an uphill battle") compared with microservices, just because you're not aware of it. Some things are not hard, maybe you just haven't heard of them yet.
If you don't even have CI/CD yet I'll argue that your team has not reached the operational sophistication required for microservices yet. There are many ways in which microservices demand more operational maturity, not less.
Also if individual devs can reach out across the codebase and turn private methods public, it is the pull request review procedure that you need to improve, not the architecture.
In process queues don't offer the same capacity smoothing as a shared queue. I don't think most of those queues offer persistence either. They're really not equivalent, are they?
This doesn't match my experience at all. In particular, in the case where a small team is working on a series of small projects it can make a lot of sense to split up the components so that any one of the small projects can be built with one or two existing generalized components along with some custom machinery and some glue code to bring it all together. That way development of all of the small products can contribute to some well honed components that get shared by multiple products and development cycles.
It seems like this is not coming up because the most common context of application is one or more teams working on a single large project probably forced to grow as fast as possible because of the funding structure involved. Increasingly, though, there are companies doing software development without the need for focus and scale that is so common to venture capital powered groups.
The major difference is whether the small projects are independent applications that use some shared libraries (what I think you mean by "existing generalized components") or whether the small projects all talk to each other to create a single large application.
If it's the former then you're just talking about refactoring functionality into a shared library, but at the end of the day you're still just building little monoliths. You don't have to worry about most of the problems that come up with microservices.
It is worth thinking about them before you need them because by the time you gain the skill and infra needed to support them, you’ll have scaled so large that your ability to coordinate and communicate will be overwhelmed.
I suppose what I really mean is: If your leadership subordinates (especially new joiners) are starting to struggle to coordinate, consider the possibility that you might already be at that point yet emotionally attached to the notion of staying small as you double or quadruple in population.
I can guarantee you that most systems out there could serve production traffic on an old laptop, with well-written code and lean database queries. I have done things in MySQL v3 in 2006 that should not be done with a modern database even today (a taxonomy search engine 3 levels deep. Never again).
In the age when database trips and network trips are treated as "free", we somehow arrived at MORE complicated solutions, like microservices.
A "hello, world" problem in 99% of the cases is a "hello, world" problem. At Google - it is not. It's a scale problem. Everything is a scale problem at Google first, and a business logic problem second. The FAANG alumni has convinced the new generation of developers that everything is a scale problem.
First of all, we called these "distributed systems". We reserved those for rare cases where scale was THE requirement, and you jumped into it fully expecting to have no life.
The world did not finally "crack" distributed systems. Most companies created multiple microservices, putting their small dev team underwater because "this is the way at the FAANG".
All of your DRY principles are out the window, and now you have to debug in production - the new word for that is OBSERVABILITY.
Not to mention that the actual reason for distributed systems is to scale multiple parts of the system independently, but you need to KNOW what has to scale individually before you do it. What I see is that the topology really reflects the company structure, of course. It's not "what has to scale separately", it's "team Y is working on X, and team Y does not want to talk to team Z, so they will create a service to make sure they don't have to talk to people".
Except that this is a giant self-own. We all still have to talk to each other, like, a LOT, because things just keep breaking all the time and no one knows why.
Dropbox, Instagram, StackOverflow - these companies are largely monoliths to this day. You thinking that your small outfit needs to be like Google is highly arrogant.
And don't get me started on the amount of money, people, CPU cycles, and CO2 emissions wasted on this solving of the problem most people don't have.
Microservices means just that: tiny, single purpose services that deploy and scale on their own.
I often see appeals to Conway's Law when discussing microservices, but teams don't organize themselves this way. Instead, teams work on macroservices: the email delivery team, or the monitoring team, or whatever. In most cases these macroservices would be best implemented and deployed as a monolith, and then presented to the outside world over a reasonable API.
^ This. Microservices are introduced because it seems like they'll be able to decouple and scale well, and then they make mistakes that make everything even harder than if it'd stayed a monolith.
Usually they don't plan for how they'll coordinate their work, and that leaves gaps in the design, and puts more risk on the business.
Team A:
↑ Email product:
__|_____________ ↑ Service C <----------
/ How do we \ | Service D <-- \
| work together | --------------------- \ ------\--------------
| on overall | | How do we manage | | | How do we |
| system design? | | stakeholder risk? | | | coordinate changes? |
\_______________/ --------------------- | \--------/----------- /
| | | |
↓ | \ |
Team B: ↓ | |
"Data" team: | /
Service A <---|-------
Service B <--/
On top of that, they don't even make a true microservice. They start directly calling into each others' data rather than interface at an API layer, they make assumptions about how each other works, they don't do load testing or set limits or quotas... and because none of them understand the rest of the system, they don't see that their mutual lack of understanding is the cause of their problems.
Even with multiple teams, if they're forced to work inside a monolith, there's a much better chance they will by accident come to understand the rest of the system.
Total tangent: very nice ASCII diagram, especially for a throwaway account. It's so unusually nice that it'd probably help identify the author if they ever made ASCII diagrams on their non-throwaway account (and who could resist?).
Start with a monolith and only break it up if you are actually forced to by the computer (i.e. the process won't fit in RAM anymore or eats all the CPU/IO). The second you break up your monolith, you lose its most powerful feature - the direct method invocation. The amount of time I see developers spending on JSON wire protocols, CORS problems, API endpoint designs, et al. really is starting to concern me. I sometimes wonder if anyone wants to do any actual work or if this is just a big game to some people.
I did the full trip on this microservices rollercoaster. Monolith => uServices => Monolith.
I used to vehemently advocate for using microservices because of how easy it would be to segment all the concerns into happy little buckets individuals could own. We used to sell our product to our customers as having a "microservices oriented architecture" as if that was magically going to solve all of our problems or was otherwise some inherent feature that our customers would be expected to care about. All this stuff really did for us is cause all of our current customers to walk away and force a re-evaluation of our desire to do business in this market. All the fun/shiny technology conversations and ideas instantly evaporated into a cloud of reality.
We are back on the right track. Hardcore monorepo/monolith design zealotry has recovered our ship. We are focused on the business and customers again. The sense of relief as we deprecated our final service-to-service JSON API/controllers was immense. No more checking 10 different piles of logs or pulling out wireshark to figure out what the fuck is happening in-between 2 different code piles at arbitrary versions.
I'm within spitting distance of the end of a project of collapsing a service-oriented system back into a Majestic Monolith. Every step of the way has reduced the lines of code, fixed bugs, saved money, saved time. It's been such a joy that I'm considering doing only this as a side hustle. "Saving" companies who were sold an over-complicated dream.
> It's been such a joy that I'm considering doing only this as a side hustle. "Saving" companies who were sold an over-complicated dream.
This has crossed my mind a lot lately. I think we are looking directly at one of the largest emerging markets in technology. What do we think the TAM is going to be for undoing webscale monstrosities by 2025? Not every business will fail due to their poor technology choices and will be able to pay some serious consulting fees...
I've practically got a system for doing this now. It mostly starts with domain modeling in excel and all the business stakeholders being in the loop at the same time until everyone agrees. I find if you get this part right it doesn't really matter if you use C# vs python, or AWS vs on-prem to build the actual product. Hard to get opinionated and locked in when your deploy to prod involves a 100 meg zip file and 3 lines of powershell ran against a single box.
I have found one other use case: limiting the blast radius of ops and deployment issues. As an example, I once was working with a large telecom client who had their AAA web service (yes, don’t roll your own auth, but this client was big enough and had the right expertise on staff to do so) in the same monolith as their account management web service. The account management service saw active, frequent development to support new functionality while the AAA code only got updated a couple times a year.
Why touch a critical web service relied upon by literally every product you have when you don’t have to? Your business still functions if account management is offline; but not when authentication is offline. Even short outages to auth were unacceptable (millions of customers) and any updates had to be performed in constrained windows due to the criticality of the service.
So we cleaved off the mission-critical parts, stuck them in their own repos to be versioned independently, which let us move faster on the account management work since we could confidently deploy code that wasn’t 100% working because we didn’t need to wait for a maintenance window.
Fundamentally, I don't really see much difference between monoliths and microservices.
In a monolith, you just call another function/class, but in microservices that function is a http call. I guess the benefit of microservices is the ability to independently scale different microservices, being able to choose different languages for different microservices, and fewer conflicts in the repo as more people work on it, but you still have to deal with backwards compatibility and versioning of endpoints.
I think lambdas are interesting when you look at it this way. A microservice is essentially a set of functions which is constantly deployed as one unit. But with Lambdas, each function is a single unit that can scale independently.
There’s a huge difference between http and function calls. When I call a function there is no chance it fails because the function ‘cannot be reached’. With services there are endless reasons why one service is unable to communicate with another. It’s a huge amount of overhead to build a system to handle inter-service communication and failure that doesn’t exist when only calling a function in the same binary.
And you add latency and overhead. With a function call you pass two ints and get back a string of length 120 (how much memory is that? little, very little); with microservices you make an API call over HTTP with its headers and get back JSON, and maybe you even need 2 API calls. For me it's easy: just create a monolith, and when you become Facebook or Netflix, split your Symfony services into microservices (just an example). Oh, and don't forget that when a developer turns a specific function into an endpoint, you need a change in the caller and you must change the API!
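To make the parent's point concrete, a sketch of the extra ceremony a network hop drags in, using the third-party requests library; the endpoint, retry policy, and fallback are all invented:

```python
# The endpoint and values below are placeholders.
import requests

def price_in_process(a: int, b: int) -> str:
    # In-process: the call either runs or raises; it cannot be "unreachable".
    return f"total: {a + b}"

def price_over_http(a: int, b: int) -> str:
    # Over HTTP: retries, timeouts, status codes and contract drift all become
    # the caller's problem.
    for _ in range(3):
        try:
            resp = requests.get(
                "http://pricing-service.internal/price",
                params={"a": a, "b": b},
                timeout=2.0,
            )
            resp.raise_for_status()
            return resp.json()["total"]
        except (requests.ConnectionError, requests.Timeout):
            continue              # unreachable or slow; retry
        except (requests.HTTPError, ValueError, KeyError):
            break                 # bad response or changed contract; retrying won't help
    return "fallback: price unavailable"
```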
I agree, but perhaps I didn't word it the right way in my original comment. With microservices there is a big overhead in development and managing failures, but functionally it doesn't offer much more than a function call within a monolith.
If execution of your binary is stopped some time between the call and return (with a subsequent restart), dealing with it is no simpler than dealing with network partitioning.
It is. Because then you know that the caller also must have failed. Whereas with an http call, you might want to retry, log the error, fall back to a cache, etc. etc.
This is ignoring the fundamentally-different issue that was raised, which is that the service boundary introduces a new failure point that doesn't exist in a function call. What you're describing could just as easily happen in a service call, IN ADDITION to the service just not being reachable.
It most certainly is easier to deal with the in-process version. I'm specifically referring to database transactions, which are much harder to have available in a distributed context. That failed network call complicates your life so much.
Plus, in the current day and age, we still don't have that many convenient ways to make two systems interact over a network. REST and things like GraphQL don't map well to actions, whereas RPC solutions like gRPC also involve a certain amount of boilerplate code, and you still need to think about the concerns above.
There's flexibility in the ops/deployment side that you don't get with simple library calls. You can take down a specific service cluster and keep your app running with slightly degraded functionality.
The blast radius on deployments can be smaller.
You've already done the work to build up IPC/networked communication so you can make big decisions in a service (like using a better suited language for some feature) without worrying about integrating it with every other feature in your monolith.
You can tailor the instance type to the service. Say you need some kind of video ingestion (or something) that is low use but needs a high memory ceiling. Would you rather pay for that memory across all your monolith instances or just run a few instances of a high memory service?
There's a lot of differences you're not thinking about.
To be fair, you CAN build a monolith in a way that is very similar to microservice architecture.
You can have a single function that handles all data base writes. A single function that monitors for failures. A single function that sends outbound notifications... etc
If these functions are called in a way that: allows for A/B testing, SUPER high latency (or no response), failure notifications sent to the code owner, automatic retries of failures, independent code deploys
Then, congratulations, you have a monolith made up of microservices!
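A toy, in-process version of what that could look like; the registry, owner handles, and notification mechanism are all invented for illustration:

```python
# Every "service" is just a registered function; the dispatcher adds retries
# and notifies the owning team when a call keeps failing.
import time

HANDLERS = {}  # name -> (handler function, owning team)

def service(name, owner):
    def register(fn):
        HANDLERS[name] = (fn, owner)
        return fn
    return register

def call(name, *args, retries=3, **kwargs):
    fn, owner = HANDLERS[name]
    for attempt in range(1, retries + 1):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            print(f"notify {owner}: {name} failed (attempt {attempt}): {exc}")
            time.sleep(0.1 * attempt)  # crude backoff before retrying
    raise RuntimeError(f"{name} failed after {retries} attempts")

@service("db.write", owner="@storage-team")
def write_row(table, row):
    print(f"writing {row} to {table}")

call("db.write", "users", {"id": 1})
```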
> Fundamentally, I don't really see much difference between monoliths and microservices.
> In a monolith, you just call another function/class, but in microservices that function is a http call.
That right there is the fundamental difference.
For one thing, calling a function in the same process is going to be orders of magnitude faster than a network connection call and all that it entails. Even if performance doesn't matter at all in some use case, it's also additional cost to be running all these additional instances.
And then, complexity went up since a network call can fail in all kinds of additional ways that a jump to a function address in the local process cannot. So the code has to deal with all those. The complexity of correlating your logs and diagnostics also just went up. Your deployment automation and versioning complexity also now went up.
All these are solvable problems, of course. It just takes more people, time and budget. If the company is large enough sometimes it's worth it for the dev team decoupling aspects. If the company is tiny, it's approximately never worth taking on all this extra work and cost.
> Fundamentally, I don't really see much difference between monoliths and microservices. In a monolith, you just call another function/class, but in microservices that function is a http call.
In J2EE that difference is a configurable technical detail. You have 1 service that calls another, and the protocol they use can be a normal function call, RMI, SOAP (or I think REST nowadays) depending on dynamic configuration.
Micro-services are meant to scale the number of developers not the number of users. As the article points out they are meant to address organizational issues and they do - at a significant technical cost.
I don't think the number of users nor number of developers is really the deciding factor. Instagram, pre-Facebook acquisition, was a VERY simple application. It was literally just a chronological feed of (strictly square) photos with captions, you followed your friends, could explore hashtags, and not much more. Videos wouldn't even come for a few more years, let alone all the crazy stuff Facebook has hamfisted in there since.
For the scope of that app, it would have been absurd to use microservices. And I think most people who are in favor of microservices would say the same thing. To me, what microservices help with is when you're building an entire platform, rather than a single product. Not even necessarily on the scope of Facebook or Google, but I've worked at companies where one team might work on an app for managing social media accounts, and another app helps you optimize the SEO of your website. Neither of those things really want to own the concept of a user they both share, or deal with account creation and whatnot. So that's handled by a dedicated microservice.
Now, when you get to a size where you're building a platform, you're likely going to have lots of developers and users, but I don't think whether you use microservices is a function of either of those numbers, and they're just a side effect of the thing you've built.
But they COULD have been. Insta was 12 people when it got acquired. If they tried to be "cool" and did distributed systems, the team would have ballooned to at least 10x.
They're fine. But what's NOT fine are nanoservices. I did security on a project once where it seemed that every function was its own microservice. User registration, user login, and password resetting were each a separate microservice. It was an utter nightmare.
I'm realizing that the YouTube algorithm has no idea I'm a software/security engineer. It thinks I'm only into gaming, face plants, and dash cam videos.
Shoot, that's practically a monolith compared to the nightmare from which I resigned earlier this year. No way would they have allowed the entire password reset to reside in a single service. It would have been in the password-db-reader service, the password-db-mutator service, and the password-checker-service, among others. Absolutely terrifying and unmanageable.
This is the flavor of reductio ad absurdum example I always give when people hype up the "micro" part of microservices. Only meant to be illustrative— I had no idea that any org was actually doing that sort of nonsense, yikes.
For me it's the repetition in horizontal tasks. Need to update TLS version? Need to move to a new region? Need to add i18n? You have to do these things 20 times instead of one.
Hey! I enjoyed this post a lot. I agree with almost all your points raised except for single-team == no microservices. Your prose makes your points easy to understand.
I have a couple layout comments:
First, I love that you have high contrast between the text and background. That is really helpful for me. There was/is a trend to have light gray on white backgrounds for blogs; this is absolutely a terrible pattern. I appreciate that you did not go this route.
Second: serif fonts are difficult to read when the font size is relatively small. Something like Jura or similar could maintain the "terminal" feel without getting bogged down in serifs.
Third: I have a really hard time reading content when it uses smaller fonts and uses a minor fraction of the screen. This is what I see: https://imgur.com/a/NPCBkHJ -- I am getting older, and reading smaller fonts is increasingly difficult for me. I tend to keep my zoom at 150%, something about this page forced it back to 100%. I am not well versed in responsive design, so I don't know the technical details for it, but having zoom maintain or using a larger font would save some cognitive cost to older users like myself needing to zoom in.
I'm looking at microservices for a different reason: security. I have given up on the idea that our code will ever be completely secure. However, microservices mean that if someone breaks into one service they can't see data belonging to a different service (that is, run each service as a different user, so OS protections mean file commands cannot open that data).
This only protects against some threats related to insecure code, but layered protection is the key to handling threats, and it is useful for the parts it does help with.
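A sketch of the per-service OS user idea, assuming a supervisor with permission to switch users and that each service's data files are readable only by its own account; the paths, commands, and usernames are hypothetical:

```python
# Launch each service under its own unprivileged account (Python 3.9+,
# supervisor must run with privileges to switch users).
import subprocess

SERVICES = {
    "auth":    {"cmd": ["/srv/auth/run"],    "user": "svc-auth"},
    "billing": {"cmd": ["/srv/billing/run"], "user": "svc-billing"},
}

procs = [
    subprocess.Popen(spec["cmd"], user=spec["user"])
    for spec in SERVICES.values()
]
```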
I could maybe see that for some specialized case, but for the general case it seems like the more independent, distributed things you're juggling, the more likely you are to end up with security holes in the first place. The time you have to spend on security would have to be spread too broad and thin.
The cost varies. Some of the code I work with is safety critical - people can die if it isn't working. If someone breaks into our system and gets private data that "only" costs us a lot of money, but if they break into our system and take over the safety critical parts people die.
OP makes a really good point in that dev teams also need to own their infrastructure when everything is a microservice. Asking "DevOps" to change an infra component prior to a release just shifts the monolith under the rug while also completely defeating the point of DevOps.
Lots of devs don't know infrastructure that well, though this is changing with the adoption of Kubernetes. Additionally, most devs don't want to go on-call when their app crashes unexpectedly.
"microservices is a psyop by big tech to make deploying & maintaining software so insanely difficult that future potential competitors are too tied up trying to keep the cloud equivalent of "hello world" afloat to present any real threat"
An interesting list, but is this specific to Microservices, or just Service Oriented Architecture in general?
For me, other than the obvious size difference, the difference between microservices and "large" (?) services is that a single team breaks down their domain into sensible layers, abstractions etc.
It's very hard to isolate load from different use cases, use a mixture of different technologies or combine batch, event driven and request/response paradigms within a monolith though.
I think certain things are easier to change in a monolith, whereas other things are easier to change in a service-based design. Depends what mistakes you've made along the way or how the spec/environment changes.
If I was CTO of a company, microservices would give me nightmares. How do you do due diligence on used free software (licenses and security updates)? How do you plan the resource usage of your whole setup if every developer can add a new autoscaling service? Who is actually keeping track on deployments so we don't accidentally overload the system? How do you refactor a cross-service feature consistently? And the worst part: Who keeps track of the n*n contracts between the services?
I mean yes, I know that each of these problems can be solved, sometimes in a relatively straightforward manner. But who really has all these aspects covered and doesn't run some services that started to smell weirdly a couple of months ago?
> How do you do due diligence on used free software (licenses and security updates)?
Use the same process you would use if you had a monolith.
The rest of your issues can be solved by planning out your services, rather than giving everyone free rein to make a new service. Switching to services doesn't magically mean your teams stop talking and designing together.
Yeah and the inter-service specification lives where? How is it monitored, tested, enforced?
I have the feeling that with the (quite possible!) addition of an inter-service codebase we would end up with a distributed monolith, i.e., a program that doesn't target a single computer but a particular substrate. I don't know whether that's a good design, though.
Benefits: The program becomes more transparent and resilient to nonfunctional problems. It is also much easier to replace parts of the program. Downsides: Executing on a developer's workstation (critical for productivity and quality, IMO) might become harder. Efficiency gets reduced by orders of magnitude in certain spots.
I find people’s reactions to microservices to be fascinating. I have always looked at them with the same framing as Unix userland tools - many small, focused apps that do something really well, coupled with a super generic IPC mechanism - in Unix’s case, using pipes or message queues.
But Unix isn’t all small tools. We have servers for the heavy work- like databases.
The challenge then becomes: how do you design that IPC mechanism? Maybe it exists! I don't know the answer yet. But it's something I think about a lot and I haven't seen compelling evidence for "microservices are always bad, no exceptions"
Well, pipes are very nice and useful for small processing tasks. "I want to know how often the word 'Parameter' occurs in the source files inside this repository." That is great! pipes are your friend. They are the fastest way known to mankind to solve this problem. But they are also fragile. You may not have thought about the fact that the word NoParameter also occurs in the code base and you did not want to count that. Now, when writing something big and complicated pipes simply cannot keep up. Nobody wants a browser that is actually 100 small executables that communicate over pipes. It will be horrible. The IPC mechanism that you are looking for and that can actually handle what is needed for a browser is called 'function call' and it was already quite popular when C was introduced....
The interaction mechanisms between services, e.g. REST calls, are somewhere in between pipes and function calls. They are a bit more reliable than pipes and less reliable than function calls. They can be used for more complicated tasks than what pipes are used for, but should not be used for tasks that are so complicated that they need function calls.
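The parent's pipe example, redone in-process, just to show how easy that particular fragility is to hit; the source string is made up:

```python
# A naive substring count matches "NoParameter" too; a word-boundary regex
# counts only the identifier we actually meant.
import re

source = "int Parameter; bool NoParameter; Parameter p;"

naive = source.count("Parameter")                  # 3 -- includes NoParameter
exact = len(re.findall(r"\bParameter\b", source))  # 2 -- word boundaries exclude it

print(naive, exact)
```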
Now, I’ve not had the pleasure of working in an organisation using Microservices and so I’m only informed anecdotally. But I always assumed they where best used as an API boundary between teams, rather than adding more complexity to a teams work.
Imagine if all of AWS was a single executable file, but had to handle all the same scale, complexity, and release velocity. The S3 team would have to patiently coordinate with the… like… SageMaker team to figure out how big of servers they need, along with every other single possible shared concern.
I picked a ridiculous example there. But just trying to show that if you need services, it should be easy to argue for them on the merits, because it’s gonna be the least bad option.
I was going to say that one nice aspect is that rollbacks are less frequent because the size of each deployment is smaller. But I've worked on million line code bases that still do weekly CICD pretty well, so I don't know if that advantage really holds water.
> Never start with a microservice architecture if you have a single team.
This is probably a good point, however isn't the entirety of the story.
Personally, I agree that most teams shouldn't start out with microservices; monoliths can be entirely sufficient and are easier to run and reason about. Otherwise you might end up with so much operational complexity that you don't have much capacity left to actually develop the software and make sure that it's actually good.
However, you also need to think about the coupling within your monolith, so that if the need arises, it can be broken up easily. I actually wrote more about this in my blog, in an article called "Moduliths: because we need to scale, but we also cannot afford microservices": https://blog.kronis.dev/articles/modulith-because-we-need-to...
Where this goes wrong is that no one actually thinks about this because their code works at that point in time: they make their PDF report generation logic tightly coupled to the rest of the codebase, same with their file upload and handling logic, same with serving the static assets, etc. So when you suddenly need to separate the front end from the back end, or extract one of the components because it's blocking an update to newer tech (for example, Java 8 to Java 11: everything else works, that one component breaks, so it would be more logical to keep it on the old/stable version for a bit instead of letting it block everything else), you just can't.
Sooner or later, containers also have to be brought up, since they can be a way to do a multitude of applications in a manageable way, but at the same time it's easy to do them wrong, perhaps due to not understanding the tech or some of the potential concerns.
Many out there think that "doing containers" involves taking their legacy monolith, putting it inside of a container and calling it a day. It isn't so, and you'll still have plenty of operational challenges if you do that. To do containers "properly", you'd need to actually look into how the application is configured, how it handles logging, external services, and how it handles persistent data. And it's not the "No true Scotsman" fallacy either, there are attempts to collect some of the more useful suggestions in actionable steps, for example: https://12factor.net/
(though those suggestions aren't related directly to containers alone, they can work wonderfully on their own, outside of container deployments)
Lastly, I've also seen Kubernetes used as almost something synonymous with containers - in some environments, you can't have a conversation about containers without it being mentioned. I've also seen projects essentially fail because people chose it due to its popularity and couldn't cope with the complexity it introduced ("Oh hey, now we also need Istio, Kiali, Helm, oh and a Nexus instance to store Helm charts in, and we'll need to write them all, and then also have a service mesh and some key value store for the services"), when something simpler, like Docker Swarm or Hashicorp Nomad, would have sufficed. I actually have yet another blog topic on the subject, "Docker Swarm over Kubernetes": https://blog.kronis.dev/articles/docker-swarm-over-kubernete...
(honestly, this also applies outside of the context of containers, for example, picking something like Apache Kafka over RabbitMQ, and then being stuck dealing with its complexity)
In conclusion, lots of consideration should be given when choosing both the architecture for any piece of software, as well as the tech to use to get it right. In some ways, this is slower and more cumbersome than just pushing some files to an FTP server that has PHP running there, but it can also be safer and more productive in the long term (configuration drift and environment rot). Sadly, if the wrong choices are made early, the bad design decisions will compound with time.
There is a huge amount of truth to this. I've been thinking of these as "microliths": large monolithic codebases surrounded by service workers (lambda style or queue style).
The biggest problem I've seen is that early applications are not built in a modular fashion. More as a maze of twisty little functions calling each other, where you quickly end up with circular dependencies and other "challenges". If your base monolithic architecture mimics the world of microservices (modular, single-purpose functions and event buses to pass around information), splitting pieces out later is far less painful.
Yet another "based on my experience" opinion. Based on my experience, our service would fail on performance because, well, nodejs is still single-threaded. Given this, we would either have to duplicate deployments of a big service by role, which gives you the same level of orchestration complexity, or rewrite in different languages, which means hello microservices again :) PS: our product was initially written by non-tech cofounders who used Heroku and microservices from day one. They used a swarm of small services to stay on the free tier, so microservices in this use case were cheaper. And yes, it's simpler for an inexperienced developer to get up to speed with your backend if it's a simple service.
Node can run multi-process with the Cluster module. Node being written as a monolith but running it in 100 instances is also an option. Roles can be implemented in software. There's much less "orchestrating complexity" when you're deploying a single service.
So Node being single-threaded is not itself a reason to use microservices.
I tend toward writing a monolith for the core API of a service, but then break out microservices for tasks that need to scale independently (or that need to run on high-memory/high-performance instances, for example). So I'm not totally against using microservices. But we should choose to use them when they're to our advantage to use them, not just "because they're already written that way."
If you deploy the same binary under different roles, it's the same complexity issue.
With modern tooling, deployment and managed storage are not a problem at all: you use templates or buildpacks or even Lambda combined with GitLab/GitHub CI capabilities. Recent progress allows you to embrace zero-ops and microservices.
In my team we don't have a dedicated ops person, and deployment from dev to prod is run via git by the developer.
Node itself has very bad profiling tooling compared to more "adult" languages.
If you run a microservice it's much easier to spot a problem in CPU usage or especially a memory leak.
And vice versa: if you're doing something in Scala or Java, the benefits of microservices are minor.
It's also so much easier later to rewrite some of the services in a more performant language.