While microservices are certainly the buzzwordy trend of the month, developers really need to take a step back and ask if designing or refactoring a project into microservices is really helpful. I've talked to developers who have taken a single app that worked perfectly fine and split it into five or six services just because "microservices!" Now they're stuck maintaining five apps, five APIs, etc. and have yet to actually benefit from the split. My rule of thumb is: can I see parts of this project being used in another project? If the answer is yes, it gets considered. If no, it stays the working app that it is.
>I've talked to developers who have taken a single app that worked perfectly fine and split it into five or six services just because "microservices!" Now they're stuck maintaining five apps, five APIs, etc
And then when they need help fixing the mess they can call in a consultant wielding Fowler's latest book, or go to a bunch of new monolithichitecture conferences where people put up slides full of memes mocking microservices.
Truly we are blessed to live in an era of such efficient job creation.
I wonder if they did it to increase their job security: "If I make something that only I can understand, my job is guaranteed, and the more complicated it is, the longer I can stay employed."
IMO, microservices are successful when the issue is scaling people (you can split into loosely coupled teams, one per service), or scaling data (different read/write patterns allow different data stores). I don't consider reuse a criterion, unless by "project" you mean a new feature within the existing app. It's not like Spotify, for example, is starting many new projects, but they will benefit from the above.
Yes. A friend is at a company that went from a monolith to a microservices approach. (The service-to-engineer ratio is something like 1:2.) He says it's radically better because somebody building a new feature can mostly just compose existing services. But they had hundreds of employees before they made the switch.
Personally, I would start sooner than that. I think new technologies make service creation easier. But the right moment for me depends a lot on product, team, and tech.
Do serious shops have phases for these things anymore? In my view a modern shop should be developing and releasing frequently, and engineers should experience the operational consequences of their development choices.
So I guess I would say that any place not working like that might not be ready for a microservices approach.
Another good use is operational conditions. Like certain parts of the app have different consumption/scalability needs, or certain parts rarely change, if ever. We currently have parts of our monolith that change ALL THE TIME, while other parts haven't been updated in years. I'd personally like to partition out the stable code.
I think containers are somewhat pushing microservices a bit too early for some organizations because of Docker's focus on a single entry point (sans supervisor, etc). They are useful for other reasons, but containers don't solve people problems; containers solve technical deployment problems.
You don't want N kinds of things if you can get by with a few (say, web tier, job workers for long-running stuff, db layer), until you start having problems working out of the same codebases/repos/etc.
Microservices make things harder to debug and most likely there is still coordination that needs to happen, and it's likely going to be less efficient versus developing libraries -- so if you're a small company, you can probably ignore these for a while.
> can I see parts of this project being used in another project?
If yes, please, create a library.
Microservices share data, if you'll need part of the data in other projects, yes, they are the way to go. If you just need behavior, or need to decentralize CPU usage, there are plenty of easier ways to achieve those.
The possibility of not redeploying instantly when the library changes may be the biggest win one gets from not using services.
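A toy sketch of the library-versus-service trade-off being discussed (the slugify function and the service URL are made up for illustration):

```python
# Toy illustration: the same behavior consumed as an in-process library call
# versus over HTTP. slugify and slug-service.internal are hypothetical.
import re
from urllib import request as urlrequest
from urllib.parse import quote

# Option 1: a library. Each project imports it; a version bump is picked up
# whenever the consumer chooses to redeploy.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Option 2: a microservice. Every call pays a network round trip and the
# operator pays for one more deployable, but all consumers share one version.
def slugify_via_service(title: str) -> str:
    with urlrequest.urlopen("http://slug-service.internal/slug?title=" + quote(title)) as resp:
        return resp.read().decode()

print(slugify("Hello, Microservices!"))  # -> "hello-microservices"
```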
Also, a library does not force people into the same technology stack. It makes it easier to work with the same programming language, but it's still possible not to. Microservices also often have complex data structures that only work well when created and parsed by the same set of tools. There is some difference, but much less than you are claiming.
Many libraries have functional equivalents in other languages. Is there some python library that's so awesome it would have no equivalent in Haskell? (If you say "NumPy" I think you're veering into framework territory.)
> I've talked to developers who have taken a single app that worked perfectly fine and split it into five or six services just because "microservices!"
We've also all met that developer who decided that Foo (library, framework, pattern, language - pick your poison) is SO MUCH BETTER than Bar (which is what you use now), and frantically sets out to replace all things Bar with Foo (and gives up halfway, so now you've got to maintain both the Foo and the Bar bits of your code, all intertwined).
The problem isn't with Foo or its merits (it's actually very useful); the problem is that that developer is, frankly, shit.
Microservices is a perfectly useful pattern to achieve a number of ends, but it's not the answer to all questions. Know what you're trying to achieve, evaluate options on their actual merits (not what's buzzwordy this month) and make decisions consciously.
EDIT to add:
> can I see parts of this project being used in another project?
That's not the only benefit of microservices. Smaller units are easier to reason about (and test!), and they allow you to experiment: a monolithic app needs to maintain some sort of consistency. Most of the time it will be written in the same language, using the same general patterns and libraries supporting those patterns (e.g. dependency injection using Guice or Spring). Most of the time, none of this will be cutting edge, some of the time it will be decidedly out of date, but changing it is hard and risky, and, well, it works.
With microservices you can use the languages, patterns and libraries that suit your isolated use case best. Need to support complex web forms? You might want to run Node.js so you can use the same JS code for validation on the front end and back end. Need lots of boring CRUD in your admin app? Ruby on Rails is your friend. Number-crunching? You might want to use Clojure or F#. Interfacing with some proprietary system (e.g. Sharepoint)? Use the best supported library you can find (.NET webservices client), and build a service up around that, on the platform that best supports that library (ASP.NET on IIS on Windows).
Launching a feature that maps across multiple microservices is also fun.
First, you need to issue pull requests for multiple projects and keep mental track of the status of those pull requests.
Second, [in a large organization] you might find out that changing the microservice on the server side breaks some client down the road, because of newly introduced or unexpected parameters that the client handles poorly.
Third, if the services [in a large organization] are distributed across multiple teams, you need to work around their schedules and their deploy calendars to orchestrate those changes.
Microservices help make your projects smaller and force developers to adhere to your interface. If you've ever worked with batshit insane devs tripping over each other, you respect the idea of those unbreakable interfaces.
That said, I heard a dev propose breaking a low-traffic app that just had to take in a message, encrypt it, then put it on a queue into microservices. My first reaction was "what the fuck?". My second reaction is that an encryption service might not be a terrible idea, though I doubt ensuring folks don't roll broken encryption is why the dev proposed it.
SoA is nothing new and there are definitely times to use it much like there are times to use MVC and times not. However, it's a "buzz" right now because of the nature of many of the apps we have out there right now. Big data is also a buzz and fits well with SoA / microservices. Docker fits in here as well. The entire way the hosting landscape is shifting nurtures an SoA and the notion of running apps as VMs sans operating system is yet another support for SoA.
No, it's not appropriate for your blog. But it IS appropriate for many applications out there and it's an incredibly beneficial way to scale a set of services and build a complex application in a cost effective manner.
I don't quite agree with the notion of being "stuck maintaining five apps, five APIs, etc." because it's not five separate things; it's really one app. You're doing something wrong if you feel that you have five different apps and APIs. In fact, your testing and debugging should become easier under such an architecture. Maintenance should be much better if done properly. Of course, regardless of your software architecture, you can end up in "maintenance hell", so that's really an invalid argument against SoA. I certainly wouldn't let the fear of bad programming prevent me from using a specific design pattern or architecture.
I would add in workers... is there something that should be run on a different system.. the typical example is image resizing. Even if a service will only be used in one place, if it's compute heavy, I'd lean towards putting it behind a work queue, on an independent service.
There's also the question about project complexity.. does the marketing website really need to be part of the application's project?
It's a matter of separating things out where it makes the most sense.
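For the image-resizing-behind-a-queue case above, a rough sketch of the shape (an in-process queue and a thread stand in for a real broker such as Redis behind celery; resize_image is a made-up placeholder for the heavy task):

```python
# The web tier only enqueues work; a worker picks it up asynchronously.
# In production the queue would live in a broker and the worker would be
# a separate service on its own hardware.
import queue, threading, time

jobs = queue.Queue()

def resize_image(path):
    time.sleep(0.5)                      # pretend this is the expensive part
    return path + ".thumb.jpg"

def worker():
    while True:
        path = jobs.get()
        if path is None:                 # sentinel to shut the worker down
            break
        print("resized:", resize_image(path))
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The request handler just enqueues and returns immediately.
jobs.put("/uploads/cat.png")
jobs.join()
jobs.put(None)
```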
There definitely seems to be more of a tradeoff than people either want to admit, or realize upfront. My teammates were just talking yesterday about a small change that would touch 5 different repos and take some number of deployments to actually be realized. Of course there are ways to justify that sort of downside, but it's not a free lunch.
>I've talked to developers who have taken a single app that worked perfectly fine and split it into five or six services just because "microservices!"
I think the idea is to use microservices when they will be owned by separate teams. If your application is small enough that a single team can meaningfully work on it, then you probably don't need to break it up.
Could you explain that to my manager please? He wants our one team to maintain 6+ microservices, each in its own git repo. It's going to be such a cluster-schtuppe, I just know it, and no one cares to listen to reason...
This is what happens when you hire a bunch of fresh grads or inexperienced engineers who blow their load whenever there's a chance to reinvent the wheel, hoping it will catch on.
Experienced engineers know that technology that has stood the test of time is boring. But we are not in the business of constantly designing "fun" ways of working; we are here to ship software on schedule and produce something that is easily understood and maintainable by the next generation of engineers, instead of getting bamboozled by new buzzwords invented by empty suits (agile, TDD, XP, microservices, MVC up your browshizzle), whether it's an engineer, or some "manager" who can't code, who reads some article on HN and decides that's the way to do it.
Apps that follow the microservice pattern will not consist entirely of independent services, as there is still a need for a central component that handles requests, responding to user events or routing to and initiating the individual services.
To reduce complexity and centralize common functionality, API gateways such as KONG (https://github.com/mashape/kong) often form the base of microservice architectures.
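A stripped-down sketch of what a gateway does conceptually: a single entry point that routes each request to whichever service owns the path. The routes, ports, and service names here are made up, and real gateways like KONG add auth, rate limiting, and so on:

```python
# Minimal routing gateway: forwards GETs to a hypothetical upstream service
# based on path prefix, and relays the response back to the caller.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request as urlrequest

ROUTES = {
    "/users": "http://localhost:5001",    # user service (hypothetical)
    "/billing": "http://localhost:5002",  # billing service (hypothetical)
}

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the upstream whose prefix matches the incoming path.
        upstream = next((u for p, u in ROUTES.items() if self.path.startswith(p)), None)
        if upstream is None:
            self.send_error(404, "no service owns this path")
            return
        # Forward the request and relay the upstream response verbatim.
        with urlrequest.urlopen(upstream + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type", resp.headers.get("Content-Type", "application/octet-stream"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Gateway).serve_forever()
```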
I contend that a lot of the reason microservices have grown so popular is that we are missing boundaries in our applications. Instead of creating stronger boundaries between modules (or having well-defined modules at all), we create physical boundaries between applications and computers to force a protocol-based system.
You hit the nail on the head. Without discipline module boundaries aren't enforced as developers reach into internal APIs between modules to meet deadlines. The result is a complex web of dependencies that are difficult to understand and maintain.
Micro services are simply a physical boundary to enforce discipline between logical modules.
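For comparison, the same discipline can in principle be enforced in-process with an explicit, narrow interface; a sketch with made-up names (BillingAPI, _BillingModule):

```python
# Other modules depend only on the narrow interface; the internals stay
# behind the module boundary, no network hop required.
from abc import ABC, abstractmethod

class BillingAPI(ABC):
    """The only surface other modules are allowed to depend on."""
    @abstractmethod
    def charge(self, customer_id: str, cents: int) -> str: ...

class _BillingModule(BillingAPI):
    # Underscore-prefixed internals: nothing outside this module should
    # import or touch them directly.
    def __init__(self):
        self._ledger = []
    def charge(self, customer_id, cents):
        self._ledger.append((customer_id, cents))
        return f"txn-{len(self._ledger)}"

_instance = _BillingModule()

def billing() -> BillingAPI:
    # Callers use billing().charge(...) and never see _BillingModule.
    return _instance
```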
It is a solution to a symptom rather than solving the root causes.
But is treating the symptom rather than the root cause always a bad thing?
Alcoholics don't keep booze in the house either. That's treating a symptom, not a cause, but it's effective. And yes, I did compare programmers violating interface boundaries to alcoholics.
Good point and I do agree to an extent. However, without discipline we are simply moving the risk. E.g., "we can't enforce interface boundaries in our system" becomes "we have a microservice architecture with N systems that we need to deploy as a monolith because we don't have the discipline to ensure backwards compatibility between systems."
Yes. My current enterprise client takes a week to deploy their microservice architecture, because all the systems need to be deployed at once.
So, they simply traded one complexity for another without fixing the root problem: the lack of a disciplined development methodology.
Yeah, I think there's a problem with the definition of microservices in theory vs in practice. In theory, the boundary of a microservice is the api, right? In practice, the boundary is the set of things you must change in order to update one part. If you have to DEPLOY ALL THE THINGS!!!! in order to update a single component, it's not a microservices architecture. Not in practice, anyway. It's a monolith with a bunch of different moving parts.
I've never really explicitly thought of it this way, but you're exactly right. In a well-designed monolith (or early on in its life), there should be a distinct flow of dependencies. However, over time, with multiple devs coming and going, and requirements piling on, the dependencies turn into a ball of yarn. Having a physical deployment boundary might reduce this intertwining of modules.
Reading this makes me feel a little like I woke up in 1990.
>> Distributed systems are harder to program, since remote calls are slow and are always at risk of failure.
This is a true statement, but it's hardly the point on which current architecture choices turn. Distributed systems are the norm now. Almost everyone is working on one, even if they don't think of it that way (and I bet they don't). There are still simple, single-process programs that are relevant to users and the people working on them, but those aren't the domains in which this microservices debate is supposed to be taking place.
Is it relevant to evaluate "microservices" vs. a "monolithic architecture" based on the costs of traversing the network stack? Back when we were thinking about breaking up our C and C++ programs into processes using RPC as glue this was something we worried a lot about, but that is because in many cases having everything in a single process was still a credible alternative. Few people are wrestling with this choice today.
The last site I worked on consisted of nginx, elasticsearch, logstash, kibana, postgresql, celery, redis, and a bunch of custom python, java, and javascript code. It could run on one server or (as it did in production) twelve. Almost all of those pieces ran in separate processes, communicated over the network using mostly standard protocols, and did one specific thing. Were these microservices?
I feel like microservices as a thing is one of the least meaningful tech fads I have seen. Minimality and cohesiveness aren't surprising new challenges. They were desirable concepts of C++ class libraries two decades ago. An implementation of a service should always be both minimal and cohesive, and it should be complete. It should be as small as possible. Whether that is "micro" or not is entirely too subjective for me.
The other trade-offs mentioned, consistency and complexity, are not much more relevant to the big question the author is trying to convince us to ask. Consistency is a property of a view of state, but the article is about distributing code (otherwise why care about module boundaries and deployment?), not distributing state. Complexity is always a key trade-off, but the complexity of distributed code is table stakes in the world we actually work in.
Things that don't need any synchronization or interesting state can of course easily go into microservices. This does not mean that application state should be stored in a concurrent model unless you have to.
My point was that most people don't consider the fact that calling MySQL or the like is a remote call and is just as likely to fail. Of course there are ways to avoid that, but most people don't think in those terms.
With a small team, we've refactored more than 150k LOC from being a single app into a lot of small services and we're not looking back.
To avoid increasing complexity in managing all these APIs we have added KONG [1] on top, so we can keep higher consistency and orchestrate common functionality among multiple services. In some ways it's funny, because KONG "centralizes" a decentralized architecture, but we needed one home base to always refer to or rely on, when most of the system started to live on the edges.
In space explorations you always have a mothership after all.
Well, dividing your problem into microservices is a lot more likely to work than another approach to change being pitched today - patching running programs on the fly. If you're considering that, something is horribly wrong with the system architecture.
As for "microservices", part of the problem is that the UNIX/Linux world has historically sucked at interprocess communication. Everything looks like a pipe, and you have to build something that works like a subroutine call on top of it. (Yes, there's System V IPC, which nobody uses.) The mismatch there results in much overhead associated with framing and such. Also, because the IPC and scheduler aren't integrated, each interprocess call tends to put either the sender or receiver at the end of the line for CPU time. This can add huge latency to service calls.
I've written hard real-time code for robotics on QNX, which has a good MsgSend/MsgReceive system for calling other programs.
That works out quite well, especially when some programs run at higher priorities than others and have hard time constraints. QNX doesn't have a really good system for starting up a set of programs and getting them communicating, though; I had to write something for that.
One lesson from QNX is that marshalling and interprocess communication should be separated if performance matters. IPC done right is fast, and marshaling done with code generated for each message format is fast. Generalized interpretive schemes like CORBA and JSON-based systems have much higher overhead than a subroutine call.
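A rough illustration of that cost gap, comparing a fixed binary layout against a generic JSON encoding (the message fields here are invented):

```python
# Packing a message with a precompiled struct format versus a generic,
# interpretive encoder. The fixed layout avoids turning numbers into text.
import json, struct, timeit

# Fixed format: message id (u32), sensor id (u16), reading (f64)
FMT = struct.Struct("<IHd")

def pack_binary(msg_id, sensor, reading):
    return FMT.pack(msg_id, sensor, reading)

def pack_json(msg_id, sensor, reading):
    return json.dumps({"id": msg_id, "sensor": sensor, "reading": reading}).encode()

if __name__ == "__main__":
    n = 100_000
    t_bin = timeit.timeit(lambda: pack_binary(1, 7, 3.14), number=n)
    t_json = timeit.timeit(lambda: pack_json(1, 7, 3.14), number=n)
    print(f"struct: {t_bin:.3f}s  json: {t_json:.3f}s for {n} messages")
```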
Another lesson is that better tools for managing groups of programs are necessary. It's a hard problem. Look at the "initd" mess. That's just a special case of managing a group of microservices. Just getting everybody connected up securely at startup is hard.
As for "microservices", part of the problem is that the UNIX/Linux world has historically sucked at interprocess communication. Everything looks like a pipe, and you have to build something that works like a subroutine call on top of it. (Yes, there's System V IPC, which nobody uses.)
What about Datagram UNIX sockets using sendmsg/recvmsg? I've implemented a simple RPC using them and it seemed fine.
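Something like this sketch, roughly (the socket paths and the toy "uppercase" operation are made up; run serve_one() in one process and call() in another):

```python
# One datagram per request, one per reply. sendto/recvfrom are used here;
# sendmsg/recvmsg work the same way when you need ancillary data.
import json, os, socket

SERVER_PATH = "/tmp/demo-rpc.sock"
CLIENT_PATH = "/tmp/demo-rpc-client.sock"

def serve_one():
    if os.path.exists(SERVER_PATH):
        os.unlink(SERVER_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    srv.bind(SERVER_PATH)
    data, sender = srv.recvfrom(65536)          # one datagram == one request
    req = json.loads(data)
    reply = {"result": req["arg"].upper()}      # the "service" does its one thing
    srv.sendto(json.dumps(reply).encode(), sender)
    srv.close()

def call(arg):
    if os.path.exists(CLIENT_PATH):
        os.unlink(CLIENT_PATH)
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    cli.bind(CLIENT_PATH)                       # bound so the server can reply
    cli.sendto(json.dumps({"arg": arg}).encode(), SERVER_PATH)
    data, _ = cli.recvfrom(65536)
    cli.close()
    return json.loads(data)["result"]
```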
In the Java world there is the idea that you make interfaces for services and then a dependency injection system decides which ones to create at runtime.
Many useful microservices can be implemented with a key-value store, and this could be implemented with anything from an in-memory Hashtable or something that uses the disk or off-heap memory or runs in a huge distributed cluster.
In that case you get to use in-process when it is expedient (literally, you cut power and latency if you don't waste time turning floating point numbers into strings and whatnot). When out-of-process makes sense, you are ready.
To accompany this you need frameworks and tools that eliminate a lot of overhead, for instance, to automatically generate the stub code for service calls, manage a large number of servers, etc.
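A sketch of that idea in Python terms (the class names and URL are hypothetical): callers depend on a KeyValueStore interface, and whether it's backed by an in-memory dict or a remote service is just a wiring decision:

```python
# Code depends on the interface; the "injection" point decides whether the
# store is in-process or a remote HTTP service.
from abc import ABC, abstractmethod
from urllib import request as urlrequest

class KeyValueStore(ABC):
    @abstractmethod
    def get(self, key: str) -> str: ...
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

class InMemoryStore(KeyValueStore):
    """In-process: no marshalling, no network hop."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data[key]
    def put(self, key, value):
        self._data[key] = value

class RemoteStore(KeyValueStore):
    """Out-of-process: same interface, calls an HTTP service."""
    def __init__(self, base_url):
        self.base_url = base_url
    def get(self, key):
        with urlrequest.urlopen(f"{self.base_url}/{key}") as resp:
            return resp.read().decode()
    def put(self, key, value):
        req = urlrequest.Request(f"{self.base_url}/{key}", data=value.encode(), method="PUT")
        urlrequest.urlopen(req).close()

def make_session_store(config) -> KeyValueStore:
    # Swap implementations without touching any of the calling code.
    if config.get("sessions_url"):
        return RemoteStore(config["sessions_url"])
    return InMemoryStore()
```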
It's hard to correlate the points made in this post with eventual consistency. Microservice or any service's runtime distribution and deployment model has impacts on consistency but that doesn't imply that systems will automatically become eventually consistent. Nothing prevents service instance clustering and co-location on the same hardware machine in different runtime units like containers or vm's talking to a db running on a single machine. It's a function of the scale and maturity of the system. Consistency itself is a function of state management within a system and more importantly, a non-trivial system typically has quite a few data stores and access patterns with their respective consistency requirements. Saying that running microservices implies eventual consistency is painting with very broad brush strokes.
I don't know if this has been researched by systems theorists. A spectrum of scale from micro to mono: when micro communication costs are too high, integrate, rinse, repeat.
> Many organizations will find the difficulty of handling such a swarm of rapidly changing tools to be prohibitive.
I've been working at Giant Swarm[1] for the last few months as the evangelist. Our original intent was to focus on providing a PaaS-like containerized stack in a multi-tenant/public cloud offering; something that felt a bit like Heroku, but less restrictive stack-wise. We're now engaging in a few professional services deals, given the high demand for an easy-to-deploy/easy-to-run container system.
I think the high demand we are seeing in the ecosystem is coming from organizations/individuals who want to understand how to use and deploy containerized stacks. I see the primary problems as being that people are unable to keep up with the innovation in the space (drinking from a firehose) and are unable to find a properly skilled labor force to implement a solution (training/experience lagging behind demand). That observation relates directly to Martin's comments on operational complexity.
I will note I observed a similar phenomenon with OpenStack, so my observations are certainly not an indicator that people will be using these deployments initially. It will likely take some time before microservice-based development is a common pattern in larger organizations. At the very least, they know they need to be thinking about it...
One benefit of a microservices/SOA architecture that I never see mentioned is that it can make you a more attractive acquisition target (if your acquirer is doing decent technical due diligence.) There's a good chance that your acquirer (a) just wants a specific part of your stack, not the whole monolithic codebase with things they don't care about, and (b) they want to integrate it into their existing systems, which are very likely written in a different language.
> Operational Complexity: You need a mature operations team to manage lots of services, which are being redeployed regularly.
IMO you need first and foremost a mature development team that releases operable, diagnosable and fixable (micro)services. Managing a swarm of mini components is a lot easier when each part is manageable to begin with.
> Operational Complexity: Tooling is still immature, but my instinct tells me that even with better tooling, the skill floor is higher in a microservice environment.
Seems like most of the tooling now focuses on automating small parts of dev and ops teams' workflows, instead of looking at the larger picture of organizational tooling. I would guess that in a few years we'll see a lot more PaaS-workflow solutions focused on abstracting away a lot of the operational complexity of microservice architectures, reducing the barrier to entry for "maturity" of dev/ops teams, just like AWS & co reduced the need for people to really understand server hardware.
I like the approach of building a monolith (with smaller libraries), and then splitting it once you need to. The argument is that you're likely to choose the wrong services to separate until after you've built a bunch of the system.
I never worked with a client that built libraries (e.g. JARs or DLLs, etc). The reason was that it was hard and took effort. So we ended up with monolith applications.
Fast forward to today: we have essentially JARs and DLLs with the added overhead of an entire system and deployment to maintain.
I'm surprised we haven't seen any attempts at frameworks that try and solve this: something that allows you to start with a monolithic app, but is designed to split off bits and pieces into stand alone microservices.
Or maybe this exists and hasn't got a lot of press. Or it has been tried and failed.
If you use frameworks where components communicate by exchanging messages then it's quite easy to start with a single app and later scale it to multiple services. With frameworks like Akka (Actors) or Vert.x the communication can be local or remote and the calling code is the same for both.
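A rough Python analogue of that idea (MessageBus and the URL are made up): callers send to an address and don't care whether the handler is in-process or remote:

```python
# Location-transparent messaging: the send() call is identical whether the
# handler is a local function or a stub forwarding to another service.
import json
from urllib import request as urlrequest

class MessageBus:
    def __init__(self):
        self.local = {}    # address -> local handler function
        self.remote = {}   # address -> URL of a remote service

    def register(self, address, handler):
        self.local[address] = handler

    def register_remote(self, address, url):
        self.remote[address] = url

    def send(self, address, payload):
        # Same calling code whether the handler lives here or elsewhere.
        if address in self.local:
            return self.local[address](payload)
        req = urlrequest.Request(
            self.remote[address],
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urlrequest.urlopen(req) as resp:
            return json.loads(resp.read())

bus = MessageBus()
bus.register("greet", lambda p: {"msg": "hello " + p["name"]})   # start in-process...
# bus.register_remote("greet", "http://greeter.internal/greet")  # ...move it out later
print(bus.send("greet", {"name": "world"}))
```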
Despite having an MBA, I do, on occasion, find myself involved in a discussion about architecture.
My rule of thumb when deciding whether to deliver a piece of functionality as a service is "Will another application/system/whatever ever want to be able to use this functionality?"
If the answer is yes, that would be a fairly compelling argument for implementing it as a service. If the answer is no, then the question becomes "Why not implement it monolithically?"
I think I've spent at least 50% of my career either contributing to or dismantling one Fowler-inspired disaster or another, so I hear you. That said people are fallible and should generally get credit for changing their minds about things.
"the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. "
Martin Fowler has some interesting ideas and writes some interesting articles, but he's not architecture Jesus laying down inviolable laws on mere mortals. We get to virtualize the concepts at will. If I declare that all of my microservices get combined into a single process at the deploy stage, so let it be said, so let it be done. Martin Fowler's opinion on it has no bearing.
Coupling and cohesion are not specific architecture choices like microservices/business objects/etc. but more of a way to describe a program's design/architecture.
You seem to have an issue with the author but you don't address his arguments. What's the point? Otherwise it just comes off as gratuitous negativity and ad-hominem.
Why is there a consistent drive in the industry to invent new buzzwords and apply untested, complicated ways of working?
The compromises you have to make to have a microservice architecture don't make sense for anyone other than Google, Amazon, or other extremely large organizations.
Even with such an architecture in place, you are going to end up with far more overhead by using microservices; it simply isn't the case that by isolating individual components into functions, you suddenly get productivity.
It just infuriates me when engineers or product managers bored with their job constantly invent buzzwords to confuse, increase complexity, end up failing, and go back to just regular old boring tech.
If it ain't broke don't fix it. Why the fuck would you want to now have 100 different API end points to do something that would've taken less than 50 or so lines of code? This doesn't make sense for 99% of software companies out there.
The majority of software companies (by employment) are enterprises that have thousands of legacy endpoints in different protocols like SOAP, MQ, or CORBA, deployed in various monolithic shapes.
They damn well will get a lot of benefits from microservices for their newer capabilities... IF they also work on the operational aspects (continuous delivery, a devops culture, and some kind of automated operating platform).
No one sane is publicly advocating building microservices for a single-team small app.