Monolith First (2015) (martinfowler.com)
820 points by asyrafql on Feb 19, 2021 | 340 comments



A more common scenario I see is that people start with a monolith that ends up inheriting all the conceptual debt that accumulates as a project evolves. All this debt builds up a great desire for change in the maintaining team.

A champion will rise with a clean architecture and design in microservice form that addresses all high-visibility pain points, attributing forecasted benefits to the perceived strengths of microservices. The team buys into the pitch and looks forward to a happily-ever-after ending.

The reality though is that the team now has multiple problems, which include:

- Addressing conceptual debt that hasn't gone away.

- Discovering and migrating what the legacy system got right, which is often not documented and not obvious.

- Dealing with the overheads of microservices that were not advertised and not prominent at a proof-of-concept scale.

- Ensuring business continuity while this piece of work goes on.

I would propose an alternative: fix your monolith first. If the team can't rewrite their ball of mud as a new monolith, then what are the chances of successfully rewriting and changing architecture?

Once there is a good, well-functioning monolith, shift a subset of responsibility that can be delegated to a dedicated team - the key point is to respect Conway's law - and either create a microservice from it or build a new independent monolith service, which aligns more with service-oriented architecture than with microservices.


When the same practices used to sell microservices get applied to modules, packages and libraries, there is no need to put a network in the middle to do the linker's job.

But too many are eager to jump into distributed computing without understanding what they are bringing into their development workflow and debugging scenarios.


> there is no need to put a network in the middle to do the linker's job.

Absolutely. More people need to understand the binding hierarchy:

- "early binding": function A calls function B. A specific implementation is selected at compile+link time.

- "late binding": function A calls function B. The available implementations are linked in at build time, but the specific implementation is selected at runtime.

From this point on, code can select implementations that did not even exist at the time function A was written:

- early binding + dynamic linking: a specific function name is selected at compile+link time, and the runtime linker picks an implementation in a fairly deterministic manner

- late binding + dynamic linking: an implementation is selected at runtime in an extremely flexible way, but still within the same process

- (D)COM/CORBA: an implementation of an object is found... somewhere. This may be in a different thread, process, or system. The system provides transparent marshalling.

- microservices: a function call involves marshalling an HTTP request to a piece of software potentially written by a different team and hosted in a datacenter somewhere on a different software lifecycle.

At each stage, your ability to predict and control what happens at the time of making a function call goes down. Beyond in-process functions, your ability to get proper backtraces and breakpoint code is impaired.
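
To make the in-process end of that hierarchy concrete, here is a rough Python sketch (names are invented; Python has no compile+link step, so the hard-coded call only approximates "early binding", while the registry illustrates late binding, where implementations can be registered after the caller was written):

    # "Early"-style: the callee is named in the source; swapping it means editing the caller.
    def render_png(data: bytes) -> str:
        return f"png({len(data)} bytes)"

    def render_direct(data: bytes) -> str:
        return render_png(data)

    # "Late"-style: implementations live in a registry and are selected at runtime;
    # new ones can be registered long after this module was written.
    RENDERERS = {"png": render_png}

    def register(name, fn):
        RENDERERS[name] = fn

    def render_late(kind: str, data: bytes) -> str:
        return RENDERERS[kind](data)

    register("svg", lambda data: f"svg({len(data)} bytes)")  # added at runtime
    print(render_late("svg", b"\x00\x01"))  # -> svg(2 bytes)

Everything further down the hierarchy (dynamic linking, COM, microservices) moves that selection further out of the caller's control.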


I think a trap that many software engineers fall into is interpreting "loose coupling" as "build things as far down the binding hierarchy as possible".

You see this under various guises in the web world - "event-driven", "microservices", "dependency injection", "module pattern". It's a very easy thing to see the appeal of, and seems to check a lot of "good architecture" boxes. There are a lot of upsides too - scaling, encapsulation, testability, modular updates.

Unfortunately, it also incurs a very high and non-obvious cost - that it's much more difficult to properly trace events through the system. Reasoning through any of these decoupled patterns frequently takes specialized constructs - additional debugging views, logging, or special instances with known state.

It is for this reason that I argue that lower-hierarchy bindings should be viewed with skepticism - only if you _cannot_ manage to solve a problem with tight coupling should you resort to a looser coupling. Introduce a loose coupling when there is a measurable downside to maintaining the tighter coupling. Even then, choose the next step down the hierarchy (i.e. a new file, class, or module rather than a new service or pubsub system).

Here, as everywhere, it is a tradeoff about how understandable versus flexible you build a system. I think it is very easy to lean towards flexibility to the detriment of progress.


"microservices turn function calls into distributed computing problems" -- tenderlove


This isn't really true though. It's not like you're suddenly adding consensus problems or something, you don't need to reinvent RAFT every time you add a new service.

Microservices put function calls behind a network call. This adds uncertainty to the call - but you could argue this is a good thing.

In the actor model, as implemented in Erlang, actors are implemented almost as isolated processes over a network. You can't accidentally share memory, you can't bind state through a function call - you have to send a message, and await a response.

And yet this model has led to extremely reliable systems, despite being extremely similar to service oriented architecture in many ways.

Why? Because putting things behind a network can, counter intuitively, lead to more resilient systems.
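
As a toy illustration of that message-passing discipline (this is plain Python with a thread and a mailbox queue, not Erlang, and the names are made up):

    import queue
    import threading

    class Counter(threading.Thread):
        """A toy 'actor': it owns its state and is only reachable via messages."""
        def __init__(self):
            super().__init__(daemon=True)
            self.mailbox = queue.Queue()
            self._count = 0  # private state, never shared directly

        def run(self):
            while True:
                msg, reply_to = self.mailbox.get()
                if msg == "incr":
                    self._count += 1
                elif msg == "get":
                    reply_to.put(self._count)

    actor = Counter()
    actor.start()
    actor.mailbox.put(("incr", None))   # fire a message
    reply = queue.Queue()
    actor.mailbox.put(("get", reply))   # ask, then await the response
    print(reply.get())                  # -> 1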


It's still a distributed problem if it's going over a network, even if you don't need consensus specifically.

I don't think you would get the same benefits as Erlang unless you either actually write in Erlang or replicate the whole fault tolerant culture and ecosystem that Erlang has created to deal with the fact that it is designed around unreliable networks. And while I haven't worked with multi-node BEAM, I bet single-node is still more reliable than multi-node. Removing a source of errors still means fewer errors.

If your argument is that we should in fact run everything on BEAM or equivalently powerful platforms, I'm all on board. My current project is on Elixir/Phoenix.


> replicate the whole fault tolerant culture and ecosystem

I think the idea is to move the general SaaS industry from the local monolith optimum to the better global distributed optimum that Erlang currently inhabits. Or rather, beyond the Erlang optimum insofar as we want the benefits of the Erlang operational model without restricting ourselves to the Erlang developer/package ecosystem. So yeah, the broader "microservice" culture hasn't yet caught up to Erlang because industry-wide culture changes don't happen overnight, especially considering the constraints involved (compatibility with existing software ecosystems). This doesn't mean that the current state of the art of microservices is right for every application or even most applications, but it doesn't mean that they're fundamentally unworkable either.


At the language level, I think you probably need at least the lightweight, crashable processes. I don't think you can just casually bring the rest of the industry forward to that standard without changing the language.


Can you please elaborate on the specific principles behind the Erlang operational model that other platforms can embrace?


Not exactly sure what bits GP is referring to, but IMO this is a good summary of what makes Erlang work: https://ferd.ca/the-zen-of-erlang.html


> putting things behind a network can, counter intuitively, lead to more resilient systems

Erlang without a network and distribution is going to be more resilient than Erlang with a network.

If you're talking about the challenges of distributed computing impacting the design of Erlang, then I agree. Erlang has a wonderful design for certain use cases. I'm not sure Erlang can replace all uses of microservices, however, because from what I understand and recall, Erlang is a fully connected network. The communication overhead of Erlang will be much greater than that of a microservice architecture that has a more deliberate design.


Playing a bit of devil's advocate, Erlang is more reliable with more nodes by design because it is expecting hardware to fail, and when that hardware does fail it migrates the functionality that was running on it to another node. With only a single node you have a single point of failure, which is the antithesis of Erlang's design.


> Erlang without a network and distribution is going to be more resilient than Erlang with a network.

I doubt that this is supposed to be true. Erlang is based on the idea of isolated processes and transactions - it's fundamental to Armstrong's thesis, which means being on a network shouldn't change how your code is built or designed.

Maybe it ends up being true, because you add more actual failures (but not more failure cases). That's fine - in Erlang that's the case. I wouldn't call that resiliency though; the resiliency is the same - uptime, sure, could be lower.

What about in other languages that don't model systems this way? Where mutable state can be shared? Where exceptions can crop up, and you don't know how to roll back state because it's been mutated?

In a system where you have a network boundary, you get that isolation of state, and if you follow microservice architecture you're given patterns to deal with many of the other failure cases.

It's not a silver bullet. Just splitting code across a network boundary won't magically make it better. But isolating state is a powerful way to improve resiliency if you leverage it properly (which is what microservice architecture intends).

You could also use immutable values and all sorts of other things to help get that isolation of state. There's lots of ways to write resilient software.


Distributed systems are about more than just consensus protocols.

At minimum, you need to start dealing with things like service discovery and accounting for all of the edge cases where one part of your system is up while another part is down, or how to deal with all of the transitional states without losing work.

> Why? Because putting things behind a network can, counter intuitively, lead to more resilient systems

If you're creating resiliency to a set of problems that you've created by going to a distributed system, it's not necessarily a net win.


> At minimum, you need to start dealing with things like service discovery and accounting for all of the edge cases where one part of your system is up while another part is down, or how to deal with all of the transitional states without losing work.

I really don't get this phobia. You already have to deal with that everywhere, don't you? I mean, you run a database. You run a web app calling your backend. You run mobile clients calling your backend. You call third-party services. The distributed system is more often than not already there. Why are we fooling ourselves into believing that just because you choose to bundle everything into a mega-executable you're not running a distributed system?

If anything, explicitly acknowledging that you already run a distributed system frames the problem in a way that forces you to face failure modes you would otherwise opt to ignore.


> You can't accidentally share memory, you can't bind state through a function call - you have to send a message, and await a response.

These are not strictly a "program on a network vs program not on a network" issue. "Accidentally sharing memory" can be solved by language design. It is correct that a system that is supposedly designed around inter-actor communication is not ideal when designed and written in a "function call-like" manner, but Erlang only partially solves this phenomenon by enforcing a type of architecture which locks away the mentioned bad practice.

> ...despite being extremely similar to service oriented architecture in many ways. Why? Because putting things behind a network can, counter intuitively, lead to more resilient systems.

This is strange logic. Resilience is usually achieved by putting a service/module/functionality/whatever into a network of replicas, whereas service-oriented architecture is about loosening the coupling between different components that do different things.

I do agree on microservice != turn function calls into distributed computing problems.

It MAY happen to systems eagerly designed with a microservice architecture without proper prior architectural validation, but it is not always the case.


> "Accidentally sharing memory" can be solved by language design.

For sure - I am definitely not trying to say that there is "one true approach".


Why? Because of the engineering work put into BEAM


I don't think that's the case at all, and I don't think that Armstrong would either.


You're probably right that Armstrong would be too humble to say it, but increased engineering effort is the only conceivable mechanism for unreliable networks to produce more reliable systems. I can't even tell what you think is going on.


In his initial thesis on Erlang, and in any talk I've seen him give, or anything I've seen him write, he attributes reliability to the model, not to BEAM.


You didn't continue far enough down the hierarchy to find the places where microservices actually work. Honestly, if you're just calling a remote endpoint whose URL is embedded in your codebase, with the hard-coded assumption that it will follow a certain contract, you're not moving down the binding hierarchy at all - you're somewhere around 'dynamic library loading' in terms of indirection.

It gets much more interesting when you don’t call functions at all - you post messages. You have no idea which systems are going to handle them or where... and at that point, microservices are freeing.

All this focus on function call binding just seems so... small, compared to what distributed microservice architectures are actually for.


Thanks, this was a great overview of the various ways to bind a function call to an implementation.


If you think of microservices as a modularization tool - a way to put reusable code in one place and call it from other places - then you are missing the point. Microservices don’t help solve DRY code problems.

Monoliths aren’t merely monolithic in terms of having a monolithic set of addressable functionality; they are also monolithic in terms of how they access resources, how they take dependencies, how they are built, tested and deployed, how they employ scarce hardware, and how they crash.

Microservices help solve problems that linkers fundamentally struggle with. Things like different parts of code wanting to use different versions of a dependency. Things like different parts of the codebase wanting to use different linking strategies.

Adding in network hops is a cost, true. But monoliths have costs too: resource contention; build and deploy times; version locking

Also, not all monolithic architectures are equal. If you’re talking about a monolithic web app with a big RDBMS behind it, that is likely going to have very different problems than a monolithic job-processing app with a big queue-based backend.


Monoliths don't have to be deployed as single instance.

In any case, too many rush to microservices with the intent of using the network as a package boundary.

Have you ever tried to debug spaghetti RPC calls across the network?

I sadly have.


Adding to the theme of microservices overhead, I feel like the amount of almost-boilerplate code used to get and set the contents of serialized data in the service interfaces exceeds the code used to actually do useful work even in medium sized services, much less microservices.


It is possible that there are architectures other than ‘monolith’ and ‘microservices’. Component architectures are also a thing. If it doesn’t all deploy in one go, I don’t think it’s a monolith.


Certainly, the microservices cargo cult has ensured that everything that isn't microservices is now a monolith.


And apparently distributed RPC is ‘microservices’.

I think we can all agree that straw man architectures don’t work. Everyone should employ the true Scotsman architecture.


It is how a large majority of microservices get implemented in practice.

What was originally a package gets its own process and REST endpoint - sorry, nowadays it should be gRPC - the network boilerplate gets wrapped in nice function calls, and it gets used everywhere just like the original monolith code.

Just like almost no one does REST as it was originally intended, most microservices end up reflecting the monolith with an additional layer of unwanted complexity.


Okay, but if you’re allowed to submit ‘well structured modular componentized librarified monolith’ as an example proving how monoliths aren’t all bad, I’m not going to let you insist on holding up ‘cargo cult grpc mess’ as proof that microservices are terrible. Let’s compare the best both structures have to offer, or consider the worst both enable - but not just compare caricatures.


On the contrary - the sales pitch at conferences is that the mess only happens with monoliths and the only possible salvation is to rewrite the world as microservices.

Good programming practices to refactor monoliths never get touched upon, as otherwise the sale would lose its appeal.


LOL. This is an excellent point. At my workplace one might just get fired for talking about anything other than microservices for any new development.


An interview of mine turned from going well into a disaster because of a single mention that I was not a fan of seeing every single project through the microservice lens. That rustled the interviewers, who strongly believed that anything not written as microservices today is not worth building. I thought that was a pretty inoffensive and practical statement I could support with points, but who knew it would derail the entire interview.


Why yes, I have.

It is essentially impossible without the (sadly rare) design pattern that I gave at https://news.ycombinator.com/item?id=26016854.


You might be interested in OpenTracing/OpenTelemetry, in case you’re not aware of it: https://opentracing.io/


I wasn't.

If I am ever unfortunate enough to work on a microservices architecture again, I'll see if I can get it used.


most of what you said was solved decades ago in linking


It is much harder to enforce the discipline of those practices across module boundaries than across network boundaries. So in theory, yes, you could have a well-modularized monolith. In practice it is seldom the case.

The other advantage of the network boundary is that you can use different languages / technologies for each of your modules / services.


But the network is so slooooooowwwwwwwww.

Don't get me wrong, sometimes it's worth it (I particularly like Spark's facilities for distributed statistical modelling), but I really don't get (and have never gotten) why you would want to inflict that pain upon yourself if you don't have to.


This. One million times this.

I've been developing for more years than some of you have lived, and the best thing I've heard in years was that Google interviews were requiring developers to understand the overhead of requests.

In addition, they should require understanding of the design complexity of asynchronous queues: needing and suffering from the management overhead of dead-letter queues, scaling by sharding queues if that makes more sense vs decentralizing, and accepting non-transactionality only when it's absolutely needed.

But not just Google- everyone. Thanks, Mr. Fowler for bringing this into the open.


Indeed! The "Latency Numbers Every Programmer Should Know" page from Peter Norvig builds a helpful intuition from a performance perspective but of course there's a lot larger cost in terms of complexity as well.


I mean, you can always deploy your microservices on the same host; it would just be a service mesh.

Adding a network is not a limitation. And frankly, I don't understand why you say things like "understanding the network". Reliability is taken care of, routing is taken care of. The remaining problems of unboundedness and causal ordering are taken care of (by various frameworks and protocols).

For DLQ management, you can simply use a persistent dead-letter queue. I mean, it's a good thing to have a DLQ because failures will always happen. As for which order to process the queue in, etc. - these are trivial questions.

You say things as if you have been doing software development for ages, but you're missing out on some very simple things.


That’s because those things are only simple when they’re working.

Each of them introduces a new, rare failure mode; because there are now many rare failure modes, you have frequent-yet-inexplicable errors.

Dead letter queues back up and run out of space; network splits still happen;


Sounds like you're saying "Don't do distributed work" if possible (considering tradeoffs of course - but I guess your contention is that people don't even consider this option).

And secondly, if you do end up with a distributed system, remember how many independently failing components there are, because that directly translates to complexity.

On both these counts I agree. Microservices are no silver bullet. Network partitions and failures happen almost every day where I work. But most people are not dealing with that level of problems, partly because of cloud providers.

The same kinds of problems will be found on a single machine too: you'd need some sort of write-ahead log, checkpointing, and maybe to optimize your kernel for faster boot-up, heap size and GC rate.

All of these problems do happen, but most people don't need to think about it.


I'm not reading this as "Don't do distributed work". It's "distributed systems have nontrivial hidden costs". Sure, monoliths are often synonymous with single points of failure. In theory, distributed systems are built to mitigate this. But unfortunately, in reality, distributed systems often introduce many additional single points of failure, because building resilient systems takes extra effort, effort that oftentimes is a secondary priority to "just ship it".


If you are using a database, a server and clients, you already have a distributed system.

You'll also likely use multiple databases (caching in e. g. Redis) and a job queue for longer tasks.

You'll also probably already have multiple instances talking to the databases, as well as multiple workers processing jobs.

Pretending that the monolith is a single thing is sneakily misleading. It's already a distributed system


Indeed. So with a monolith we usually already have 3-4 (or more) somewhat reliable systems, and one unreliable system, which is your monolithic app. Why add other unreliable systems if you don't really need them?

Making a system reliable is really, really hard and takes many resources, which companies seldom pursue.


Just because all the code is in one place doesn't mean it's one system.


Compare the overheads and failure modes of a request to those of a method call - that's the comparison.

Requests can fail in a host of ways that a call simply cannot; the complexity is massively greater than that of a method call.


+1

I realized this one day when I was drawing some nice sequence diagrams and presenting it to a senior and he said "But who's ensuring the sequence?". You'll never ask this question in a single threaded system.

Having said that, these things are unavoidable. The expectations from a system are too great to not have distributed systems in picture.

Monoliths are so hard to deploy. It's even more problematic when you have code optimized for both sync CPU-intensive work and async IO in the same service. Figuring out the optimal fleet size is also harder.

I'd love to hear some ways to address this issue and also not to have microservice bloat.


I heartily agree. Don't treat yourself like a 3rd party.


What you tend to see is the difficulty of crossing these network boundaries (which happens anyway) _and_ all the coupling and complexity of a distributed ball of mud.

Getting engineers who don't intuitively understand or maybe even care how to avoid coupling in monoliths to work on a distributed application can result in all the same class of problems plus chains of network calls and all the extra overhead you should be avoiding.

It seems like you tell people to respect the boundaries, and if that fails you can make the wall difficult to climb. The group of people that respect the boundaries whether virtual or not, will continue to respect the boundaries. The others will spend huge amounts of effort and energy getting really good at finding gaps in the wall and/or really good at climbing.


If you're using microservices to enforce stronger interface boundaries, what you're really relying on is separate git repos with separate access control to make it difficult to refactor across codebases. A much simpler way to achieve that same benefit is to create libraries developed in separate repos.


Wow. That's an interesting take on the problem of scaling a monolith without introducing the network between the system interactions.


Hum... People have been doing it since the 60's, when it became usual to mail tapes around.

If you take a look at dependency management at open source software, you'll see a mostly unified procedure, that scales to an "entirety of mankind" sized team without working too badly for single developers, so it can handle your team size too.

The problem that the "microservices bring better architectures" people are trying to solve isn't open by any measure. It was patently solved, decades ago, with stuff that works much better than microservices, in a way that is known to a large chunk of developers and openly published all over the internet for anybody who wants to read about it.

Microservices still have their use. It's just that "it makes people write better code" isn't true.


it's also possible to have more than one microservice in a single repo. defining good interfaces is a problem unsolved by repo size and count.


> So in theory yes you could have a well modularized monolith.

I've often wondered if this is a pattern sitting right under our noses, i.e., starting with a monolith with strong boundaries, and giving architects/developers a way to more gracefully break apart the monolith. Today it feels very manual, but it doesn't need to be.

What if we had frameworks that more gracefully scaled from monoliths to distributed systems? If we baked something like GRPC into the system from the beginning, we could more gracefully break the monolith apart. And the "seams" would be more apparent inside the monolith because the GRPC-style calls would be explicit.

(Please don't get too hung up on GRPC, I'm thinking it could be any number of methods; it's more about the pattern than the tooling).

The advantages to this style would be:

* Seeing the explicit boundaries, or potential boundaries, sooner.

* Faster refactoring: it's MUCH easier to refactor a monolith than refactor a distributed architecture.

* Simulating network overhead. In production, the intra-boundary calls would just feel like function calls, but in a development or testing environment you could simulate network conditions: lag, failures, etc.

I'm wondering if anything like this exists today?
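
Something in that direction can be sketched with nothing more than an interface plus an injectable implementation; the names (BillingService etc.) are invented, and this is only a rough illustration of the "explicit seam you could later move behind gRPC" idea, including simulating lag and failures in tests:

    import abc
    import random
    import time

    class BillingService(abc.ABC):
        @abc.abstractmethod
        def charge(self, customer_id: str, cents: int) -> bool: ...

    class LocalBilling(BillingService):
        """In-process implementation while everything is still a monolith."""
        def charge(self, customer_id: str, cents: int) -> bool:
            return cents > 0

    class FlakyBilling(BillingService):
        """Dev/test wrapper that simulates network lag and failures at the seam."""
        def __init__(self, inner: BillingService, failure_rate=0.1, max_lag=0.05):
            self.inner, self.failure_rate, self.max_lag = inner, failure_rate, max_lag

        def charge(self, customer_id: str, cents: int) -> bool:
            time.sleep(random.uniform(0, self.max_lag))
            if random.random() < self.failure_rate:
                raise TimeoutError("simulated network failure at the service boundary")
            return self.inner.charge(customer_id, cents)

    def checkout(billing: BillingService, customer_id: str, cents: int) -> str:
        # Call sites only see the interface; a future remote stub (gRPC, HTTP)
        # would plug in here without touching this code.
        return "ok" if billing.charge(customer_id, cents) else "declined"

    try:
        print(checkout(FlakyBilling(LocalBilling()), "c-42", 1999))
    except TimeoutError as exc:
        print(f"boundary failure surfaced in testing: {exc}")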


Aren't you just describing traditional RPC calls? Many tools for this: DBus on Linux, Microsoft RPC on Windows, and more that I'm not aware of.

If you've only got a basic IPC system (say, Unix domain sockets), then you could stream a standard serialization format across them (MessagePack, Protobuf, etc.).

To your idea of gracefully moving to network-distributed system: If nothing else, couldn't you just actually start with gRPC and connect to localhost?

Is there something I'm missing?


You are missing the network.

When you start with gRPC and connect to localhost, usually the worst that can happen with a RPC call is that the process crashes, and your RPC call eventually times out.

But other than that everything else seems to work as a local function call.

Now when you move the server onto another computer, maybe it didn't crash - it was just a network hiccup - and now you are getting a reply back after the calling process is no longer waiting, or you make two asynchronous calls but, due to network latency and packet distribution, they get processed out of order.

Or eventually one server is not enough for the load, and you decide to add another one, so you put some kind of load-balancing mechanism in place, but you also need to take care of unprocessed messages that one of the nodes took responsibility for, and so forth.

There is a reason why there are so many CS books and papers on distributed systems.

Using them as a mitigation for teams that don't understand how to write modular code only escalates the problem: you move from spaghetti calls in-process to spaghetti RPC calls, and now have to handle network failures in the process.


If you develop with COM in mind, then it doesn't matter whether the object is in-process or somewhere on the network.


That is the selling theme of DCOM and MTS, CORBA and many other IPC mechanisms, until there is a network failure of some sort that the runtime alone cannot mitigate.


Well, we often use plain old HTTP for this purpose, hence the plethora of 3rd party API's one can make an HTTP call to...

(I side with the monolith, FWIW...I love Carl Hewitt's work and all, it just brings in a whole set of stuff a single actor doesn't need... I loved the comment on binding and RPC above, also the one in which an RPC call's failure modes were compared to the (smaller profile) method call's)


Sure but the converses of these statements are advantages for monoliths.

Most languages provide a way to separate the declaration of an interface from its implementation. And a common language makes it much easier to ensure that changes that break that interface are caught at compile time.


I have worked on some horrendously coupled micro-service architectures.


Not at all. That’s what libraries are for. Most applications already use lots of libraries to get anything done. Just use the same idea for your own organisations code.


Not to mention the operational overhead, which grows exponentially with the number of services, languages, frameworks and the sum of all their quirks.


Small nitpick, it's probably the product of their quirks rather than the sum.


It depends on whether you're measuring in the linear or logarithmic domain; eg 2 options * 2 options = 4 options, but 1 bit + 1 bit = 2 bits.


Correct! :D


I've been trying to get buy-in from colleagues to have stricter boundaries between modules but without much success, mainly because I don't fully understand how to do it myself.

Let's say we have 3 different modules, all domains: sales, work, and materials. A customer places an order, someone on the factory floor needs to process it, and they need materials to do it. Materials know what work they are for, and work knows what order it's for (there's probably a better way to do this. This is just an example).

On the frontend, users want to see all the materials for a specific order. You could have a single query in the materials module that joins tables across domains. Is that ok? I guess in this instance the materials module wouldn't be importing from other modules. It does have to know about sales though.

Here's another contrived example. We have a certain material and want to know all the orders that used this material. Since we want orders, it makes sense to me to add this in the sales module. Again, you can perform joins to get the answer, and again this doesn't necessarily involve importing from other modules. Conceptually, though, it just doesn't feel right.


It is a bit hard to just explain in a bunch of comments.

In your examples you need to add extra layers, just like you would do with the microservices.

There would be the DTOs that represent the actual data that gets passed across the modules, the view models that package the data together as it makes sense for the views, and the repository module that actually abstracts whether the data is accessed via SQL, ORM, RPC or whatever.
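
A minimal sketch of that layering for the materials-for-an-order query (all names are invented; the point is that the cross-domain join lives behind a repository interface and only a DTO crosses the module boundary):

    from dataclasses import dataclass
    from typing import List, Protocol

    @dataclass(frozen=True)
    class MaterialForOrderDTO:
        material_id: int
        name: str
        quantity: int

    class MaterialsForOrderRepository(Protocol):
        def list_for_order(self, order_id: int) -> List[MaterialForOrderDTO]: ...

    class SqlMaterialsForOrderRepository:
        """One possible implementation: the join across domains is hidden here
        and could later be replaced by an RPC to a materials service."""
        def __init__(self, conn):
            self.conn = conn

        def list_for_order(self, order_id: int) -> List[MaterialForOrderDTO]:
            rows = self.conn.execute(
                """SELECT m.id, m.name, m.quantity
                     FROM materials m
                     JOIN work w ON w.id = m.work_id
                    WHERE w.order_id = ?""", (order_id,))
            return [MaterialForOrderDTO(*row) for row in rows]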

You should look into something like:

"Domain-Driven Design: Tackling Complexity in the Heart of Software"

https://www.amazon.com/Eric-J-Evans/dp/0321125215

"Component Software: Beyond Object-Oriented Programming"

https://www.amazon.com/Component-Software-Beyond-Object-Orie...


You need a thin layer on top that coordinates your many singular domains. We use GraphQL to do this. An API gateway or backend-for-frontend are similar concepts. Having many simple domains without any dependencies is great; they just need to be combined by a simple layer with many dependencies - a coordination layer.

Joining on the database layer is still adding a dependency between domains. The data models still need to come out of one domain. Dependencies add complexity. So joining is just like importing a module, but worse because it's hidden from the application.

If you really need a join or transaction, you need to think as if you had microservices. You'd need to denormalize data from one domain into another. Then the receiving domain can do whatever it wants.

Of course, you can always break these boundaries and add dependencies. But you end up with the complexity that comes with that in the long run.
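
A bare-bones sketch of such a coordination layer (module and function names are assumptions; GraphQL resolvers, an API gateway or a backend-for-frontend would play the same role):

    def materials_for_order(order_id, sales, work, materials):
        """Combine independent domains for one frontend view; the domains never
        import each other - only this thin layer depends on all of them."""
        order = sales.get_order(order_id)                  # sales domain only
        work_items = work.list_for_order(order_id)         # work domain only
        mats = [m
                for item in work_items
                for m in materials.list_for_work(item["id"])]  # materials domain only
        return {"order": order, "materials": mats}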


It seems like maybe your onion could use an additional layer or two.

If I understand your example, the usual solution is to separate your business objects from your business logic, and add a data access layer between them.

In terms of dependencies, you would want the data access layer module to depend on the business object modules. And your business logic modules would depend on both the data access layer and business object modules. You may find that it is ok to group business objects from multiple domains into a single module.

Note that this somewhat mirrors the structure you might expect to see in a microservices architecture.


A more generous take on the meta: people do understand (some of) the problems involved but are less worried about debugging scenarios than about resume padding. The most useful property of microservices is architecting CVs.


Correct. What's best for the long term health of the business is not taken into consideration. The Board of Directors and the CEO only care about this quarter and this year, why would the foot soldiers take a long view?

As an engineer, the thought process goes: I can use the same old tried and true patterns that will just get the job done. That would be safe and comfortable, but it won't add anything to my skillset/resume.

Or we could try out this sexy new tech that the internet is buzzing about, it will make my job more interesting, and better position me to move onto my next job. Or at least give me more options.

It's essentially the principal-agent problem. And by the way, I don't blame developers for taking this position.


I feel there is also a chicken-egg problem. Is IT hype driven because of RDD or vice versa? I also do not blame any party involved for acting like they do.


Yep. I am working on a Mongo system, started right about when Mongo was peak hype. This application should be using a relational database, but no resume padding for the original developer, and a ton of tech debt and a crappy database for me to work with.


What are the characteristics that make it more relational than non-relational?


Well we effectively have foreign keys between tables / collections. Just none of the benefits.


How do you individually deploy and operate modules, packages, and libraries? The purported benefits of microservices have always been about harmonizing with Conway's law. In particular, one of the nice benefits of microservices is that it's harder to "cheat" on the architecture, because doing so likely involves changing things in multiple repos and getting signoff from another team (because, again, the code organization resembles the org chart). The high-level architecture is more rigid, with all of the pros and cons entailed.

I suppose if you work in an organization where everyone (including management) is very disciplined and respects the high level architecture then this isn't much of a benefit, but I've never had the pleasure of working in such an organization.

That said, I hear all the time about people making a mess with microservices, so I'm sure there's another side, and I haven't managed to figure out yet why these other experiences don't match my own. I've mostly seen microservice architecture as an improvement over my experiences with monoliths (in particular, it seems like it's really hard to do many small, frequent releases with monoliths because the releases inevitably involve collaborating across every team). Maybe I've just never been part of an organization that has done monoliths well.


> But too many are eager to jump into distributed computing without understanding what they are bringing into their development workflow and debugging scenarios.

This fallacy is the distributed computing bogeyman.

Just because you peel a responsibility off a monolith does not mean you suddenly have a complex mess. This is a false premise. Think about it: one of the first things to be peeled off a monolith is expensive fire-and-forget background tasks, which more often than not are already idempotent.

Once these expensive background tasks are peeled off, you get a far more manageable and sane system which is far easier to reason about, develop, maintain and run.

Hell, one of the basic principles of microservices is that you should peel off responsibilities that are totally isolated and independent. Why should you be more concerned about the possibility of 10% of your system being down if the alternative is having 100% of your system down? More often than not you don't even bat an eye if you get your frontend calling your backend and half a dozen third-party services. Why should it bother you if your front-end calls two of your own services instead of just one?

I get the concerns about microservices, but this irrational monolith-mania has no rational basis either.


I'm really thinking about this right now. I'm responsible for designing a reimplementation of a system, and for some reason it's taken as a given that it will be broken up into services in separate containers, which means network calls, which aren't reliable. That then brings a host of other problems to solve and a load of admin tasks. I'm really thinking I should redesign it as a monolith. It's a small system anyway.


This is profound insight. Well put.


When I started programming, linking was the only way of packaging software in mainstream computing; if it was so superior, we wouldn't have moved away from it.


Microservices architectures are like the Navy using nuclear reactors to power aircraft carriers. It’s really compelling in certain circumstances. But it comes with staggering and hard to understand costs that only make sense at certain scales.


Here is the thing, whatever you write doesn't work without linking, so no we never moved away from it.


There is really something to be said for using C as a teaching language. When you start with C, the entire process is laid bare. And I fully agree with the earlier observation that microservices delegate the 'link' phase to a distributed operational/runtime regime. Just being able to see that (conceptually) would steer a mature team away from adopting this approach unless absolutely necessary.

One of the patterns that I have noted over the past 3 decades in the field is that operational complexity is far more accessible to practitioners than conceptual complexity. Microservices shift complexity from conceptual to operational. With some rare exceptions, most MS designs I've seen were proposed by teams that were incapable of effective conceptual modeling of their domain.


But is linking the right way to couple business logic from two different organisational units?

That is the question being discussed.


The article and parent are not about two different organisational units.

Most microservice implementations are within a single team. That is the real question being discussed.


Abstractly, whether you have a network in between or not, both are linking. And whilst softer linking has some advantages, it also has some tradeoffs.


Why do you think that something other than linking is superior?


> But is linking the right way to couple business logic from two different organisational units

I think it's not, but I would like to hear other opinions.


I'm not sure why you are being downvoted. In the context of calling business logic, we have moved away from linking.

I guess the context is only implied in your parent post.

The rise of the network API in the last 20 years has proven its own benefits. Whether you are calling a monolith or a microservice, it's easier to upgrade the logic without recompiling and re-linking all dependencies.


I think there's too much being conflated to make such a statement.

For example, microservices tend to communicate via strings of bytes (e.g. containing HTTP requests with JSON payloads, or whatever). We could do a similar thing with `void*` in C, or `byte[]` in Java, etc.

Languages which support 'separate compilation' only need to recompile modules which have changed; if all of our modules communicate using `void*` then only the module containing our change will be recompiled.

It's also easy to share a `void*` between modules written in different languages, e.g. using a memory-mapped file, a foreign function interface, etc.

It's also relatively easy to hook such systems into a network; it introduces headaches regarding disconnection, packet loss, etc. but those are all standard problems with anything networked. The actual logic would work as-is (since all of the required parsing, validation, serialisation, etc. is already there, for shoehorning our data in and out of `void*`).

If these benefits were so clear, I would expect to see compiled applications moving in this direction. Yet instead I see the opposite: stronger, more elaborate contracts between modules (e.g. generic/parametric types, algebraic data types, recursive types, higher-kinded types, existential types, borrow checkers, linear types, dependent types, etc.)


There is one approach Fowler suggested: SacrificialArchitecture. You build your monolith quickly, get to market fit, and once you understand the service boundaries you move to microservices.

Personally I would like to try Umbrella Projects[1]. You can design it as microservices but build and deploy it as a monolith. Overhead is lower, and it is easier to figure out the right services when everything is in one codebase. It can be easily implemented in other languages/frameworks as well.

[1] https://elixirschool.com/en/lessons/advanced/umbrella-projec...


This! Service boundaries are vital and almost intractable to design up front, as you won't be sure of your system's use cases. I've worked on numerous microservices systems and all of them were designed in such a way that data loss and unknown error states were mandatory. Microservices seem simpler, but are actually harder than my methodology of "as few services as necessary" + bounded contexts within them. Using network partitions to stop spaghetti code just leads to spaghetti network partitions. The discipline to keep things simple and self-contained is the important part.


This is why Domain Driven Design should be mandatory reading for anyone making decisions on software architecture.


Unfortunately in actual practise I've encountered significant cargo culting about DDD. Attracts a lot of people with mid level experience who suddenly want to dictate architecture based on theoretical ideals rather than practicalities.

There's no substitute for experience, and specifics of adapting architecture to the context of the problem you're trying to solve.

In one case I wanted to use a technology that actually matches DDD very closely - but the cargo-cultish closed-mindedness of the practitioners meant they couldn't even understand how an old idea/tech they hadn't liked or approved of was actually an implementation of DDD.

The problem there is not DDD; the problem is the people who get closed-minded and stuck on their one true way (often without the really broad experience needed to make that judgement call effectively). I've learned that the language of absolutes - "should be", "mandatory", "all" do XXX - is often a sign of that kind of cargo-cultish thinking.


That's very true, and a problem I've also seen. There are an army of mediocre people who've gone full Cargo Cult with it. Usually comes with a big dose of Uncle Bob: The Bad Parts.


We have exactly the same construct in .Net: a solution file which can contain multiple projects. All of the code is in one place, so you're working with a monolith^. Services are separate and dependencies are checked by the compiler/toolchain - so you design as microservices. Finally deployment is up to you - you can deploy as Microservices, or as one big Monolith.

^ Compile times are fast so this isn't an issue like it can be in other languages.


I like the idea of SacrificialArchitecture. The big downside is having to really communicate to management/other departments that it is meant to be a kind of prototype. If it looks good enough and gets paying customers, it is hard to find the time to stop, take a step back and rewrite.


"Nothing is more permanent than a temporary solution"


That's what my company has done. We have a microservice-like architecture, but discourage dependencies on other services. If we need to use one service in another, we use dependency injection (which allows us to switch out the communication protocol later if need be) or talk over a message bus instead. The idea was that we would figure out later which sub-services actually need to be split out based on usage.

AFAIK, we haven't actually split out anything yet after all these years. All of our scale issues have been related to rather lackadaisical DB design within services (such as too much data and using a single table for far too many different purposes), something an arbitrary HTTP barrier erected between services would not have helped with at all.


Does each microservice/module have its own db?


Nope. Everything lives in one big DB, but services are not allowed to touch each other's tables directly - communication must be done through service API endpoints.


I have seen micro-service architectures like that. It seems pretty pointless. You should be letting the database do as much of the heavy lifting as you can.


This is what I do in several of my backend Kotlin code bases and it's worked very well. I've also heard it called "distributed monorepo". There are a handful of ways to share code, and the tradeoffs of this approach are very manageable.

The biggest downside I've encountered is that you need to figure out the deployment abstraction and then figure out how that impacts CI/CD. You'll probably do some form of templating YAML, and things like canaries and hotfixes can be a bit trickier than normal.


I've been messing with this using moleculer and node on a project. Feels like a good compromise.


> It can be easy implemented in other lang/frameworks as well.

No it can't, it relies on the runtime being capable of fairly strong isolation between parts of the same project, something that is a famous strength of Erlang. If you try to do the same thing in a language like Python or Ruby with monkeypatching and surprise globals, you'll get into a lot of trouble.


Weird, we do it with Python all the time. Just don't use globals/threadlocals everywhere and you will be good.


As long as you're not using any libraries/frameworks that use them.


I don't think we've ended up with libs that share state in a dangerous way, maybe we got lucky.


Modules are global singletons, class definitions are global singletons. Monkeypatching is less common in Python than Ruby but I'd still consider it a significant risk.


I think the key point of Fowler's article, which is obscured a little by all the monolith discussion, is that trying to start with microservices doesn't work. He's claiming this from an empirical point of view -- if you look at successful, working systems, they may use microservices now, but they most often started as monoliths.

People are talking about Conway’s law, but the more important one here is Gall’s law: “A complex system that works is invariably found to have evolved from a simple system that worked.”


Conway/Gall duality.


> I would propose an alternative: fix your monolith first. If the team can't rewrite their ball of mud as a new monolith, then what are the chances of successfully rewriting and changing architecture?

A million times this. If you can't chart the path to fixing what you have, you don't understand the problem space well enough.

The most common reply I've heard to this is "but the old one was in [OLD FRAMEWORK] and we will rewrite in [NEW FRAMEWORK OR LANGUAGE]," blaming the problem not on unclear concepts/hacks or shortcuts taken to patch over fuzzy requirements but on purely "technical" tech debt. But it usually takes wayyyyy longer than they expect to actually finish the rewrite... because of all the surprises from not fully understanding it in the first place.

So even if you want the new language or framework, your roadmap isn't complete until you understand the old one well enough.


This is the correct way to do things but, unfortunately, like I wrote in my other comment, this knowledge only comes after one has gone through a few of these "transformational" changes. I've seen this happen a few times and monolith vs micro services is such a done deal that it's very difficult to argue against so, even with experience, you will always fail if you try and go against the grain :)


That’s what I am always saying. If you can’t manage libraries you will fail at microservices too, just harder.

I am working on a project right now that was designed as microservices from the start. It’s really hard to change design when every change impacts several independent components. It seems to me that microservices are good for very stable and well understood use cases but evolving a design with microservices can be painful.


Especially if, in a new requirement, microservice A needs to interact with service C, which was originally accessed via B. You can either:

* Make additional changes in B, which also takes resources and time and introduces overhead plus a point of failure, or

* Make A interact directly with C, which breaks the boundary.


We are just moving complexity from one place to another. And sometimes a few things get lost in this transformation.


I've noticed after many optimization projects in my investment bank that most architectures are fine, and are rarely a source of, or solution to, problems.

The source of most of our problems is always that we started too small to understand the implications of our laziness at the start (I'll loop over this thing - oops, it's now a nested loop with 1 million elements because a guy 3 years ago made it nested to fix something and the business grew). Most times, we simply have to profile / fix / profile / fix until we reach sub-millisecond. Then we can discuss strategic architecture.

Interestingly, most of the architecture problems we actually had to solve were because someone 20 years ago chose an event-based microservice architecture that could not scale once we reached millions upon millions of events and had no simple, stupid way to query state other than replaying the entire stream in every location. Every location also includes the C# desktop application that 1000 users use. In this case, yes, we changed the architecture to have basic indexed search with a queryable state rather than one reconstructed client-side.


Another issue I've seen is that people push all the problems onto the monolith even if they're external - one place had a Perl monolith (which was bad, sure) but their main issue was an overloaded database, which could have been addressed by moving some queries out of the (awful homegrown) ORM and using e.g. signed session cookies instead of having every request cause a session-table hit.
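
For reference, the signed-cookie idea needs very little machinery - roughly something like this sketch using Python's standard library (key handling and expiry omitted; the secret is a placeholder):

    import base64
    import hashlib
    import hmac
    import json

    SECRET = b"server-side-secret"  # placeholder; load from config in practice

    def sign_session(data: dict) -> str:
        """Serialize and sign session data so it can live in the cookie itself."""
        payload = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}.{sig}"

    def verify_session(cookie: str):
        """Recompute the signature; only a matching cookie yields a session,
        so no session-table lookup is needed per request."""
        payload, _, sig = cookie.rpartition(".")
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None  # tampered or corrupted cookie
        return json.loads(base64.urlsafe_b64decode(payload.encode()))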


> If the team can't rewrite their ball of mud as a new monolith, then what are the chances of successfully rewriting and changing architecture?

Well, sometimes there are very complex subsystems in the monolith and it's easier to create a completely new microservice out of them instead of trying to rewrite the existing code in the monolith.

We had done so successfully by creating a new payment microservice with Stripe integration and then just routing every payment that way. Doing the same in a huge pile of Perl mess was assessed as (nearly) impossible by the core developer team, without any doubts.

But I have to admit that the full monolith codebase is in a maintenance mode beyond repair; only bug & security fixes are accepted at this point in time. Feature development is no longer a viable option for this codebase.


> I would propose an alternative: fix your monolith first. If the team can't rewrite their ball of mud as a new monolith, then what are the chances of successfully rewriting and changing architecture?

Often, the problems with the monolith are

* different parts of the monolith's functionality need to scale independently,

* requirements for different parts of the monolith change with different velocity,

* the monolith team is too large and it's becoming difficult to build, test, and deploy everyone's potentially conflicting changes at a regular cadence, with bugs in one part of the code blocking deploying changes to other parts of the code.

If you don't have one of these problems, you probably don't need to break off microservices, and just fixing the monolith probably makes sense.


> the monolith team is too large and it's becoming difficult to build, test, and deploy everyone's potentially conflicting changes at a regular cadence, with bugs in one part of the code blocking deploying changes to other parts of the code.

This is the only argument I'm ever really sold on for an SOA. I wonder if service:engineer is a ratio that could serve as a heuristic for a technical organization's health? I know that there are obvious counter-examples (shopify looks like they're doing just fine), but in other orgs having that ratio be too high or too low could be a warning sign that change is needed.


> different parts of the monolith's functionality need to scale independently,

I hear this a lot. Can you give me an example? Can't I just add more pods and the "parts" that need more will now have more?

Agreed on the other reasons (although ideally the entire monolith should be CI/CD so release cadence is irrelevant, but life isn't perfect)


So the alternative is to have a monolith deployed with different "identities" based on how it's configured. So you configure and scale out different groups of the monolith code independently, to satisfy different roles and scale up or down as needed.

However, sometimes the resources needed to load at start up can be drastically different, leading to different memory requirements. Or different libraries that could conflict and can also impact disk and memory requirements. For a large monolith, this can be significant.

So at what point do you go from different "configuration", to where enough has changed it's a truly different service? The dividing line between code and configuration can be very fluid.
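
A rough sketch of the "same monolith, different identities" idea (names are invented): the same artifact reads a role from configuration and only wires up that role's work, so each group can be scaled and resourced independently until it earns the right to become a separate service:

    import os

    def start_web():
        print("serving HTTP requests")      # e.g. run the WSGI/ASGI app

    def start_worker():
        print("consuming background jobs")  # e.g. run the queue consumer loop

    ROLES = {"web": start_web, "worker": start_worker}

    if __name__ == "__main__":
        role = os.environ.get("ROLE", "web")  # same build artifact, different configuration
        ROLES[role]()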

> the entire monolith should be CI/CD so release cadence is irrelevant

But if one module has a bug and is failing testing, or has features partially completed, it blocks releasing all the other code in the monolith that is working.


>For a large monolith, this can be significant.

If we are at this point we aren't really talking about microservices though. More like taking a gigantic monolith and breaking it up into a handful of macroservices.


I really like how you've worded this and agree.

Some of this is driven by well-documented cognitive biases, such as the availability heuristic which you've identified, but there are so many.

This is a good video on it https://birgitta.info/redefining-confidence/


> I would propose an alternative: fix your monolith first

I like the idea of vertical slices for this: once you've done that, refactoring to microservices will be much easier.


I'm living this right now. A big, fat python 2 monolith maintained for 10 years. Everyone hates it, but it actually works perfectly fine.


Hahaha. I have been saying the same for years but have either been punished by my managers or mercilessly downvoted on HN.

Somehow mentioning that microservices might not be perfect solution in every case triggers a lot of people.

I have actually helped save at least one project in a huge bank, which got rolled from 140 services into one. The team was also scaled down to a third of its size but was able to work on this monolithic application far more efficiently. Reliability, which was a huge issue before the change, also improved dramatically.

When I joined, I observed 6 consecutive failed deployments. Each took an entire week to prepare and an entire weekend to execute (with something like 40 people on a bridge call).

When I left, I had observed 50 consecutive successful deployments, each requiring 1h to prepare (basically a meeting to discuss and approve the change) and 2h of a single engineer's time to execute using automation.

Most projects absolutely don't need microservices.

Breaking anything apart brings the inefficiencies of having to manage multiple things. Your people now spend time managing applications rather than writing actual business logic. You have to have a really mature process to bring those inefficiencies down.

If you want to "do microservices" you have to precisely know what kind of benefits you are after. Because the benefits better be higher than the costs or you are just sabotaging your project.

There are actually ways to manage a huge monolithic application that don't require each team to have their own repository, CI/CD, binary, etc.

How do you think things like Excel or Photoshop have been developed? They are certainly too large for a single team to handle.


I think my biggest gripe with orgs that adopt microservices is they don't build out any of the testing, CI/CD, monitoring and debugging workflows to support it. It goes from a shitty, slow monolithic application that a super-pro can debug in a few minutes to... slow, shitty disparate services that are managed by different teams who don't share anything. Suddenly you've got a Cuckoo's Egg situation where one guy needs to get access to all the things to find out what the fuck is happening. Or you just accept it's shitty and slow, and pay a consultancy to rebuild it in New Thing 2.0 in 8 years when accounting forget about the last 3 rebuilds.


That is EXACTLY what I have observed, time and time again.

If you have trouble managing ONE application what makes you think you will be better at managing multiple?

Also, running a distributed system is way more complicated than having all logic in a single application. Ideally you want to delay switching to a distributed system until it is inevitable that you are not going to be able to fulfill the demand using a monolithic service.

If your application has problems, don't move to microservices. Remove all unnecessary tasks and let your team focus on solving the problems first and then automate all your development, testing, deployment and maintenance processes.

Or call me and I can help you diagnose and plan:)


Monoliths are also distributed systems and will run on multiple hosts, most probably co-ordinating on some sort of state (and that state management will need to take care of concurrency, consistency). Some hosts will go down. Service traffic will increase.

I understand your point. You are using "distributed" in the sense of "how one big piece of work is distributed"; you probably also hate overly "Object Oriented" code for similar reasons.

But distributed systems is a well understood thing in the industry. If I call you and you tell me this, then you're directly responsible for hurting how successful I would be by giving me a misleading sense of what a distributed system is.


> But distributed systems is a well understood thing in the industry.

Wait, what?

Distributed systems are one of the most active areas in CS currently. That's the opposite of "well understood".

It's true that most systems people create are required to be distributed. But they are coordinated by a single database layer that satisfies approximately all the requirements. What remains is an atomic facade that developers can program against as if each client were the only one. There is a huge difference between that and a microservices architecture.


Distributed systems are well understood though. We have a lot of really useful theoretical primitives, and a bunch of examples of why implementing them is hard. It doesn't make the problems easier, but it's an area that, as you say, has a ton of active research. Most engineers writing web applications aren't breaking new ground in distributed systems - they're using their judgement to choose among tradeoffs.


Well understood areas do not have a lot of active research. Research aims exactly to understand things better, and people always try to focus it on areas where there are many things to understand better.

Failure modes in distributed systems are understood reasonably well, but solving those failures is not, and the theoretical primitives are far from universal at this point. (And yes, hard too, where "hard" means more "generalizes badly" than hard to implement, as the latter can be solved by reusing libraries.)

The problem is that once you distribute your data into microservices, the distance between well researched, solved ground and unexplored ground that even researchers don't dare go is extremely thin, and many developers don't know how to tell the difference.


Correct. That doesn't make monolithic systems "not distributed".

Secondly, I don't know why you say "distributed systems are an active area of research" and use this as some sort of retort.

If I say "Is a monolithic app running on two separate hosts a distributed system or not", if your answer is "We don't know, it's an active area of research" or "It's not. Only microservices are distributed"


Hum... I don't think you understood what I said.

Most of what people call monolithic systems are indeed distributed. There are usually explicit requirements for them to be distributed, so it's not up to the developer.

But ACID databases provide an island of well understood behavior in the hostile area of distributed systems, and most of those programs can do with just an ACID database and no further communication. (Now, whether your database is really ACID is another can of worms.)


Different kinds of distributed systems have wildly different complexity in the kinds of fun the distributed nature can cause. If you have a replicated set of monoliths, you typically have fewer exciting modes of behaviour and failure.

Consider how many unique communication graph edges and multi-hop causal chains of effects you have in a typical microservice system vs having replicated copies of the monolith running, not to mention the several reimplementations or slightly varying versions and behaviours of the same.


I don't even consider a replicated set of monoliths to be a distributed system.

If you've done your work correctly you get almost no distributed system problems. For example, you might be pinning your users to a particular app server, or maybe you use Kafka and it is the Kafka broker that decides which backend node gets which topic partition to process.

The only thing you need then is to properly talk to your database (an app server talking to a database is still a distributed system!), use database transactions or maybe use optimistic locking.

The fun starts when you have your transaction spread over multiple services and sometimes more than one hop from the root of the transaction.


> Monoliths are also distributed systems and will run on multiple hosts

... not necessarily. Although the big SPOF monolith has gone out of fashion, do not underestimate the throughput possible from one single very fast server.


Well, no matter how fast a single server is, it can't keep getting faster.

You might shoot yourself in the foot by optimizing only for single servers, because eventually you'll need horizontal scaling and it's better to think about it at the beginning of your architecture.


> eventually you'll need horizontal scaling

This is far from inevitable. There are tons of systems which never grow that much - not everyone works at a growth-oriented startup - or do so in ways which aren’t obvious when initially designing it. Given how easily you can get massive servers these days you can also buy yourself a lot of margin for one developer’s salary part time.


Whatever happened to premature optimization being bad?


Even in a contrived situation where you have a strict cache locality constraint for performance reasons or something, you'd still want to have at least a second host for failover. Now you have a distributed system and a service discovery problem!


So I actually find that microservices should help tremendously here. Service A starts throwing 500s. Service A has a bunch of well defined API calls it makes, with known, and logged, requests and responses. These requests should be validated on the way in and produce 400s if they aren't well formed. Most 500s IMHO result from the validations not catching all corner cases, or not handling the downstream errors properly. But in general it should be relatively easy to track down why one, or a series of, calls failed.

I also find that having separate, distinct services puts up a lot of friction against scope creep in a service and also avoids side-effect problems - i.e. you made this call, and little did you know it updated state somewhere you completely didn't expect, and now touching this area is considered off limits, or at least scary, because it has tentacles in so many different places. Eventually this will absolutely happen IME. No, of course not on your team, you are better than that, but eventually teams change, this is now handled by the offshore or other B/C team, or a tyrant manager who is obsessed with hitting the date, hacks or not, takes over for a year or two, etc...

But I guess an absolutely critical key to that is having a logging/monitoring/testing/tracing workflow built in. Frameworks can help, Hapi.js makes a lot of this stuff a core concept for example. This is table stakes to be doing "micro" services though and any team that doesn't realize that has no business going near them. Based on the comments here though ignorance around this for teams embracing microservices might be more common than I had imagined.


> So I actually find that microservices should help tremendously here. Service A starts throwing 500s. Service A has a bunch of well defined API calls it makes, with known, and logged, requests and responses.

This isn’t wrong - although there is a reasonable concern about expanding interconnection problems – but I think there’s commonly a misattribution problem in these discussions: a team which can produce clean microservices by definition has a good handle on architecture, ownership, quality, business understanding, etc. and would almost certainly bring those same traits to bear successfully for a more monolithic architecture, too. A specific case of this is when people successfully replace an old system with a new one and credit new languages, methodology, etc. more than better understanding of the problem, which is usually the biggest single factor.

Fundamentally, I like microservices (scaling & security boundaries) but I think anything trendy encounters the silver bullet problem where people love the idea that there’s this one weird trick they can do rather than invest in the harder problems of culture, training, managing features versus technical debt, etc. Adopting microservices doesn’t mean anyone has to acknowledge that the way they were managing projects wasn’t effective.


I think this nails it. It's not the concept's fault if the implementation is half-assed.


> It's not the concept's fault if the implementation is half-assed.

worse than that. Case in point: a major US national bank interview for a developer position. they talk about moving toward microservices. a simple question since they mention microservices: will someone be able to access and modify data underneath my service without going through my service? the uneasy answer: yes, it does and will happen.

that's the end of the interview as far as I am concerned.

If you can access and modify the underlying data store of my microservice without going through it, not only is it not a microservice, it isn't much of a service oriented architecture. this isn't me being a purist, just being practical. If we need to coordinate with five different teams to change the internal implementation of my "microservice", what is the point of doing service oriented architecture? all the downside with zero upside?


That's a distinction worth following up:

My take is there are 3 kinds of micro-service

* Service Ownership of data - if you want to change customer name, there is only one place to go.

* Worker services - they don't really own data, they process something - usually requesting it from golden sources. Just worker bees, but the thing they do (send out marketing emails to that person's name) is not done by anyone else

* Everything else is borked


sorry, that’s a distributed monolith. if everyone can mess with the data directly you might as well keep it in one service


LOL, I'm in talks about migrating microservices into a monolith purely because there is so much overhead in managing them. You need at least a couple of people just to keep it all in place. And the latter means the company is prepared to kill the product even when there are actual customers and new ones are coming.

Microservices make sense when you have millions of users and there is a need to quickly scale horizontally. Or when you have a zillion developers, which probably means that your product is huge. Or when you are building a global service from the get go and are funded by a VC.


I can understand Résumé-Driven Development. Our industry is famous for its "Must have (X+2) years of (X year old technology)" job requirements.


A sister team at work (same manager, separate sprints) split a reasonable component into three microservices. One with business logic, one which talks to an external service, and one which presents an interface to outside clients. They then came to own the external service as well. So now they have 4 micro services when one or two would have been plenty. I don't hesitate to poke fun when that inevitably brings them frustration. Another team we work with has their infrastructure split across a half-dozen or more microservices, and then they use those names in design discussions as though we should have any idea which sub component they're talking about. Microservices are a tool that fits some situations, but they can be used so irresponsibly.


> There are actually ways to manage huge monolithic application that don't require each team to have their own repository, ci/cd, binary, etc.

Would be interested to hear about some of these.


The basic technique is to use modular architecture.

You divide your application into separate problems, each represented by one or more modules. You create APIs for these modules to talk to each other.

You also create some project-wide guidelines for application architecture, so that the modules coexist as good neighbors.

You then have separate teams responsible for one or more modules.

If your application is large enough you might consider building some additional internal framework, for example a plugin mechanism.

For example, if your application is an imaginary banking system that takes care of users' accounts, transactions and products they have, you might have some base framework (which is flows of data in the application, events like pre/post date change, etc.) and then you might have different products developed to subscribe to those flows of data or events and act upon the rest of the system through internal APIs.
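
A rough sketch of that shape in Go (the module names are invented for illustration, and the two "modules" are collapsed into one file here so it runs as-is): each module exposes a small API, keeps its implementation private, and the wiring happens in one place.

  package main

  import "fmt"

  // In a real codebase these would be separate packages
  // (e.g. internal/accounts, internal/transactions).

  // AccountsAPI is the only surface the accounts module exposes to its neighbors.
  type AccountsAPI interface {
    Balance(accountID string) (int64, error)
  }

  // accounts is the module's private implementation.
  type accounts struct{ balances map[string]int64 }

  func (a *accounts) Balance(id string) (int64, error) {
    b, ok := a.balances[id]
    if !ok {
      return 0, fmt.Errorf("unknown account %q", id)
    }
    return b, nil
  }

  // transactions only knows about the accounts module through its API.
  type transactions struct{ accounts AccountsAPI }

  func (t *transactions) CanSpend(accountID string, amount int64) (bool, error) {
    b, err := t.accounts.Balance(accountID)
    if err != nil {
      return false, err
    }
    return b >= amount, nil
  }

  func main() {
    // Project-wide wiring happens in one place, e.g. main or an app package.
    acc := &accounts{balances: map[string]int64{"alice": 100}}
    tx := &transactions{accounts: acc}
    ok, _ := tx.CanSpend("alice", 42)
    fmt.Println("alice can spend 42:", ok)
  }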


Perfectly described! People forget or don't know that there are many different architectures across monolithic systems, or that there is something in between a monolith and microservices.


A monorepo, I guess. It looks like Microsoft is using a monorepo for their office applications (https://rushjs.io/) and you could do the same thing for Node.js using yarn workspaces/lerna/rush.

Each "microservice" could live in a separate package which you can import and bundle into a single executable.

Elixir has "Umbrella Projects": https://elixirschool.com/en/lessons/advanced/umbrella-projec...

Rust/Cargo has workspaces: https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html


Well for starters, each component's API can be published and versioned separately from its implementation.

The build of a component would only have access to the APIs of the other components (and this can include not having knowledge of the container it runs in).

The implementation can then change rapidly, with the API that the other teams develop against moving more slowly.

Even so, code reviews can be critical. The things to look out for (and block if possible) are hidden or poorly defined parameters like database connections/transactions, thread local storage and general bag parameters.

In some languages dependency injection should be useful here. Unfortunately DI tools like Spring can actually expose the internals of components, introduce container based hidden parameters and usually end up being a versioned dependency of every component.
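
For example, a hedged sketch in Go of "compile against the API, keep the dependencies explicit" (BillingAPI and NewBilling are made-up names): the constructor receives the database handle instead of fishing it out of thread-local storage or a context bag, and callers only ever see the interface.

  package main

  import (
    "database/sql"
    "fmt"
  )

  // BillingAPI is what other components compile against; it can be versioned
  // and published separately from the implementation behind it.
  type BillingAPI interface {
    Invoice(customerID string) (string, error)
  }

  // billing keeps its dependencies explicit: they arrive via the constructor,
  // not via a shared "bag parameter" or hidden container magic.
  type billing struct {
    db *sql.DB
  }

  func NewBilling(db *sql.DB) BillingAPI {
    return &billing{db: db}
  }

  func (b *billing) Invoice(customerID string) (string, error) {
    // A real implementation would query b.db; omitted here.
    return fmt.Sprintf("invoice for %s", customerID), nil
  }

  func main() {
    // Wiring is done once, at the edge; callers only ever see BillingAPI.
    var db *sql.DB // would come from sql.Open in a real program
    var api BillingAPI = NewBilling(db)
    inv, _ := api.Invoice("cust-42")
    fmt.Println(inv)
  }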


> I have been saying the same for years but have either been punished by my managers or mercilessly downvoted on HN.

Ex Amazon SDE here. I've been saying many times that Amazon tends to have the right granularity of services: roughly 1 or 2 services for a team.

A team that maintains the service in production autonomously and deploys updates without having to sync up with other teams.

[Disclaimer: I'm not talking about AWS]


I don't think you can downvote on Hacker News. That's Reddit.


You can, you just need 501 karma first. Seems like someone wanted to prove you wrong since you've been downvoted.


Haha seems a few people wanted to prove me wrong :) Didn't know that!


You need 500 karma for the downvote button to appear: https://jacquesmattheij.com/the-unofficial-hn-faq/#karma


The problem with creating a dichotomy between "monolith" and "microservices" is that... well... it doesn't make any sense. You describe your "microservices" and I'll explain how it's really just a monolith.

"So let me get this straight, you have a Billing package and an Ordering package. How are these organized again?"

-> "We have a distributed system."

"So you you compile them together and deploy them horizontally across many machines? Like a distributed monolith?"

-> "No we use microservices. The two packages are compiled and deployed separately so there is no code dependencies between them."

"Okay. So if I understand this right, you develop each package separately but they both access the same database?"

-> "No no no no! Microservices are not just about code dependency management, they are also about data ownership. Each package is developed and deployed separately and also manages it own database."

"Ah, so you have two monoliths?"

-> "..."

You see, the problem is that the above terms are too broad to describe anything meaningful about a system. And usually "monolith" is used to describe the dependencies between code whereas "microservices" is used to describe dependencies about data. It turns out there is a lot of overlap.

Design and implement your systems according to your use-case and stop worrying about how to label them.


> "Ah, so you have two monoliths?"

Made me snort.


Divide and conquer is all well and good, but it often ends up being:

A: We have a problem. I know, we'll use divide and conquer! Runs off.

B: Uh, that's actually Banach-Tarski^W^Wmicroservices? Wait, come back!

later

B: Now we have two problems, contorted to fit into a spherical subspace^W^WIO monad, and a dependency on the axiom of choice^W^W^Wasyncio library.

A: Wait, why are there monads? This is Javascript!

B: There are always monads, it's just a question of whether your type system is overengineered enough to model them.


I keep saying this every time this topic comes up: have a look at Elixir/Erlang. The OTP library gives you some great reusable patterns (and the Elixir project is adding new ones[1]) out of the box. You're basically developing a single-codebase microservice project which feels like a monolith. You can spread out on multiple machines easily if you need to, and the introspection tooling is better than anything you can buy right now; it's amazing.

Things can get tricky if you need to spread out to hundreds of machines, but 99%+ of projects won't get to that scale.

[1] https://elixir-lang.org/blog/2016/07/14/announcing-genstage/


How does versioning work? Is it required that all VM instances run the same version?


I'm gonna assume you're talking about versions of application code. Obviously it's not a trivial problem and you have to keep it in mind while architecting your application, but Hot Code reloading is a well supported thing [1]

https://blog.appsignal.com/2018/10/16/elixir-alchemy-hot-cod...


And the Erlang runtime is used in massive telephone switches, famously the AXD301, which is claimed to have an uptime percentage of 99.9999999% over 20 years.


It's unfortunate that this number stills gets thrown around: https://stackoverflow.com/a/26447543


Experiences with the AXD301 suggest that “five nines” availability, downtime for software upgrades included, is a more realistic assessment. For nonstop operations, you need multiple computers, redundant power supplies, multiple network interfaces and reliable networks, cooling systems that never fail, and cables that system administrators cannot trip over, not to mention engineers who are well practiced in their maintenance skills. Considering that this target has been achieved at a fraction of the effort that would have been needed in a conventional programming language, it is still something to be very proud of.

- from Erlang Programming by Simon Thompson, Chapter 1


I see services (not micro) as an organizational solution more than a technical one. Multiple teams working on the same service tend to have coordination problems. From this perspective starting with a monolith as you start with one team seems natural.

Now the trend of breaking things up for the sake of it - going micro - seems to benefit the cloud, consultancy and solution providers more than anybody else. Orchestrating infrastructure, deployments, dependencies, monitoring, logging, etc, goes from "scp .tar.gz" to rocket science fast as the number of services grows to tens and hundreds.

In the end the only way to truly simplify a product's development and maintenance is to reduce its scope. Moving complexity from inside the code to the pipelines solves nothing.


Yes, that's the "Conway's Law" argument; microservices exist to replicate the team structure in the code, not the other way round.


I work for a small company that uses a microservices architecture. The product is simple: a user registers, enters preferences, selects from courses and submits a form. A monolith would be doable with 5 good engineers. Instead we have about 50 engineers + QA releasing bug-filled code.

The company's design philosophy is based more on what is fashionable than what is suitable.


There’s too much funding in the industry. Hilarious to think that if interest rates rose we would have fewer micro service implementations.


> we have about 50 engineers

The plan worked.


Clearly something is working if you've got 50 engineers turning out bug-filled code and systems, and yet the business keeps on living.


> and yet the business keeps on living

Sure, they just change the business plan. Move into as many new markets as possible, focus more on new sales than maintaining customers, ship more features/products than you can maintain, etc.

If the company gets twice as big but revenue/growth remains the same, that's a sinking ship.


Anec-contra-data:

I worked for a while at a small startup, initially as the sole developer and eventually growing to 3. We had an architecture I would describe as at least being "microservices-esque": a single large "monolithic" backend service, with a number of smaller consumer services, webapps, UIs.

The distinction between microservices and monoliths may be debatable but I believe monoliths as described by Martin Fowler typically involve your primary service logic and your UI coexisting. I've found that splitting these at least is usually fruitful.

I think this approach is sometimes called "fan-out", though I've seen that term used to describe other different things too; namely the opposite where there's a single monolithic UI layer fanning out to many micro upstreams.

TL;DR: Fowler's opening 2 bullet points seem likely to be a false dichotomy as he's force-classifying all architectures into two buckets.


We currently have this approach. One graphql backend-for-frontend server in front of an arbitrary number of BE servers. The UI always has a consistent interface and the BE can focus on its business logic instead of how the FE should display it. The two layers can change fairly independently. We're still early in the project, but in theory I love this architecture and would use it again.


With this in mind, if I were to start a new project today (using Go, which is all I use anymore). I would segment my code inside of a monolith using interfaces. This would incur a very slight initial cost in the organization of the codebase prior to building. As your usage goes up and the need to scale parts of it increases, you could swap these interfaces out little by little by reimplementing the interface to contact your new external microservice.

I've worked at several companies where microservices were necessary, and I can't believe how clunky they are to work with. I feel like in 2021 I should be able to write a monolith that is capable of horizontally scaling parts of itself.


Exactly with this vision I started our Go monolithic backend at the beginning of 2020. Currently, one year later, with a few people developing it, it's working very well. And thank god I resisted and pushed back against people who were advocating to split off parts into AWS Lambdas, misguided by confusing "physical" separation with logical separation.

And regarding the cost you mention, I don't think there is any. Yes, the components only know each other via their interfaces, not their concrete types. Yes, you need to define these interfaces. But that's in essence plain old school dependency injection. You'd do that anyway for testing, right?


I don't see a lot of cost associated with the duck typing-ish approach go takes with interfaces. Instead of passing an implementation directly (or even having it in the first place) you can develop with an interface from the get go. Makes for easier tests too.

In my experience the cost is lower if you write code the way you described (and you don't need to plan ahead as much)


This is the way. Interfaces for modularity and leveraging nested packages within the monolith to keep code decoupled and organized.


Yep, that is the right approach, modular code.


My experience is that micro services are a method to enforce modularity. You don’t have any choice other than talking over interfaces (assuming you don’t cheat by sharing a database). Keeping things modular in a monolith requires discipline that is enforced by reviewing everyone else’s code.


As someone who is new to Go and learning actively, do you have an example of how this would look?


Sure. First you'll need to familiarize yourself with interfaces in go. This link explains them well.

https://gobyexample.com/interfaces

So a struct satisfies an interface if it implements all of the methods of the interface. The example you see in the above link is a monolithic design. Everything only exists in one place and will be deployed together.

So if the functions for circle were getting hit really hard and we wanted to split those out into their own microservice so that we could scale up and down based on traffic, it would look something like this. https://play.golang.org/p/WOp0RL-pVg3

There we have externalCircle which satisfies the interface the same way circle did, but externalCircle makes http calls to an external service. That code won't run because it's missing a thing or two, but it shows the concept.


There's of course some nuance here - for example, the original interface was designed to never error, assuming calculation was local. The new implementation of that interface that makes remote calls may now fail - and basically either has to log and return an arbitrary float, or panic, because the interface doesn't allow for propagating errors.
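
Right - and if you suspect an implementation might ever live across the network, it can be worth baking that into the interface up front. A hypothetical error-aware variant of that geometry example (the remote implementation is faked here so the snippet stays self-contained):

  package main

  import (
    "fmt"
    "math"
  )

  // geometry lets every implementation, local or remote, report failure.
  type geometry interface {
    area() (float64, error)
    perim() (float64, error)
  }

  // circle is the local, in-process implementation; it never actually fails.
  type circle struct{ radius float64 }

  func (c circle) area() (float64, error)  { return math.Pi * c.radius * c.radius, nil }
  func (c circle) perim() (float64, error) { return 2 * math.Pi * c.radius, nil }

  // remoteCircle stands in for an implementation that calls another service.
  // The HTTP call is faked so this compiles and runs on its own.
  type remoteCircle struct{ baseURL string }

  func (r remoteCircle) area() (float64, error) {
    return 0, fmt.Errorf("GET %s/area: connection refused", r.baseURL)
  }
  func (r remoteCircle) perim() (float64, error) {
    return 0, fmt.Errorf("GET %s/perim: connection refused", r.baseURL)
  }

  func measure(g geometry) {
    if a, err := g.area(); err != nil {
      fmt.Println("area failed:", err)
    } else {
      fmt.Println("area:", a)
    }
  }

  func main() {
    measure(circle{radius: 2})
    measure(remoteCircle{baseURL: "http://circles.internal"})
  }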


I'm embarrassed to say that I recently built microservices first, and am now kicking myself as I merge them back in to a monolith.


If you don't mind telling this story, why did you end up merging them back in? I would have thought that needlessly making microservices is bad but not bad enough to justify merging back.


It kind of depends how well you chose the service boundaries. If you chose well, there's not a huge downside, just a bit of extra overhead. If you chose poorly, it can make it practically impossible to have a reliable app with acceptable performance. If you start out with microservices before you understand the problem space, you are much more likely to choose poorly.


It depends how many you have. Microservices can make things like updating dependencies a massive pain (now you have to do it 10 times)


You can still build a "monolith" but in a very modular way. Can't scale independently like microservices, but what if you don't need scale! Compile times are higher but what if you don't need to compile that much code! One error can bring down all "monolith services" but what if your app is not that big!

I'd wager, something like 90% of software projects in companies can just get by with monoliths.

You know...there are monoliths that are well architected, and then there are monoliths that are developed for job security.


Never really got this "One error can bring down all "monolith services"".

Can you share one example where an error couldn't be handled gracefully in a monolith?

Say my shipping service fails for some reason - I can still fall back to a default cost in a monolith.


Agreed. I've found propagating an error across multiple services to be a much harder problem than handling it with return values/exceptions in a monolith.


>Can't scale independently like microservices, but what if you don't need scale!

What do you mean? Aside from the deployed codebase, I don't quite see it - if you need memory, allocate; start small/empty for any data structure. If it's a managed/GC setup, deallocating is inherent, so there's no special case for freeing memory. Don't create unnecessary threads - but even then, dormant threads are very cheap nowadays. There you go - it scales vertically nicely; make sure the application can scale horizontally (say, partitioning by user) and you have all the benefits with close to no drawbacks.


We had a very similar setting where our codebase was built modular but deployed as a monolith. At one point we wanted to scale out just the queue processor module. We deployed the complete monolith and only enabled the queue processor on the other machine. It did add up to extra memory & disk usage, but in the end it helped our team deploy without going through a microservice architecture.


You're actually right. Just don't call the unused portions of the app and you can horizontally scale too.


The most amazing thing I saw with a microservices first approach was someone building a betting system with microservices. So if you made a bet, one microservice would make an http call to another to request your balance, make a decision based on that and then move the funds. No locks, no transactions, no nothing. I wasn't sure whether to laugh or cry. Fortunately I got there in time before this thing was launched...


But how monolithic were our monoliths anyway? Take your bog-standard ecommerce solution, built with a LAMP stack.

Okay, we've got a php process. Let's put our business logic there.

Okay, I need a database. With current cloud practices, that's probably going to be deployed on a different machine. We've got our business logic service and our datastore separated by a network boundary from the get go.

Okay, this is not very performant, I need to cache some data. Let's add a Redis instance in our cloud provider. I guess this will be our cache service.

Okay we need to do full-text search, let's add an elasticsearch cluster through our cloud provider.

Okay, I need to store users. We can build that into our business logic core, of course, but screw that. We are using Auth0. I guess our "user service" is distributed now. Okay, my PHP process can't keep up with all these requests, let's scale this horizontally by deploying multiple PHP processes behind a load balancer!

Okay, now I need to do some batch processing after hours. I could add a cron job for that, but I don't want to deal with everything needed for retrying/reprocessing/failure handling plus now that it's a multi-instance deployment it's not even clear which process should do this! Let's put this work in a Lambda from our cloud provider.

So until now, we had a star architecture where everything talked to our business core. But by adding this Lambda that talks directly to other services without consulting the core, we've lost that.

Now, stripping out the business core into its constituent parts doesn't sound so strange, does it?


Yeah, that's a terrible idea, so don't do something like that. You don't need to break up your monolith to run cronjobs.

My setup: I have an async worker running alongside my application. Same codebase, same environment, same monolith; just instead of accepting HTTP requests it pops tasks off a queue. The framework handles retrying/reprocessing/failure.

To run cron tasks, I have CloudWatch Events -> SNS -> HTTP endpoint in the monolith which pushes the cron task into the queue.

And this isn't some unusual setup. This is something that people have been dealing with for a very long time, long before Lambda came into existence. Most web frameworks have async tasks built-in or an integration with a framework that can do async tasks.
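
To sketch the monolith side of that in Go (names and the in-memory queue are made up; a real setup would use whatever queue and framework you already have): the scheduler only hits an HTTP endpoint that enqueues, and a worker in the same codebase drains the queue.

  package main

  import (
    "log"
    "net/http"
  )

  // tasks is a stand-in for whatever queue your framework gives you
  // (SQS, a broker, etc.); a buffered channel keeps this runnable.
  var tasks = make(chan string, 100)

  // cronHandler is the HTTP endpoint the scheduler hits;
  // it only enqueues, the worker does the actual work.
  func cronHandler(w http.ResponseWriter, r *http.Request) {
    tasks <- r.URL.Query().Get("task")
    w.WriteHeader(http.StatusAccepted)
  }

  // worker runs alongside the web process: same codebase, same monolith,
  // it just pops tasks off the queue instead of serving requests.
  func worker() {
    for t := range tasks {
      log.Printf("running task %q", t)
    }
  }

  func main() {
    go worker()
    http.HandleFunc("/internal/cron", cronHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
  }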


Now you need to upgrade. Your HTTP request handling takes maybe 2s at most to upgrade: bring up a new instance and drain the old one. But you have one daily cron job that takes eight hours, and another that only takes 1s but needs to run hourly.

Now you need to upgrade to a new version, which requires an incompatible DB migration affecting the request handling and the hourly job. The long job started at 2PM. How do you do the upgrade, without someone needing to stay late?

(If it's e.g. PHP this gets even worse due to lazy package loading, you don't need a DB migration just a changed import path.)

I'm not a fan of microservices, especially the ridiculous factoring I see in lambda-style approaches today. But one of our biggest PITAs was a "horizontally scalable monolith" that after 7 years has accrued so so many responsibilities it's either impossible to find a time slot for an upgrade with zero/low downtime, or we have to plan & develop dealing with shared storage where half may upgrade one day and half the next - all the operational overhead of SOA without the benefits.


IMO this is an unnecessary dichotomy that we're currently forced to deal with, because we don't yet have a good solution for programming the "cloud" the same way you program for a single computer, and that ends up leaking into all the consequent decisions of modularisation, performance, code management, deployment, team organisation, etc.

(My holy grail would be programming for distributed environments as if you had one giant single-thread computer, and function calls could arbitrarily happen over IO or not, then concerns would boil down to code organisation first. I believe Erlang's OTP model or the way some features of Google Cloud Platform are organised gets us closer to this ideal, but we're not quite there yet.)


> programming for distributed environments as if you had one giant single-thread computer, and function calls could arbitrarily happen over IO or not

Ex-Amazon SDE here. Message-passing libs like 0mq tried to push this idea.

They never became very popular internally because passing data between green threads vs OS threads vs processes vs hosts vs datacenters is never the same thing.

Latencies, bandwidths and probability of loss are incredibly different. Also the variance of such dimensions. (Not to mention security and legal requirements)

Furthermore, you cannot have LARGE applications autonomously move across networks without risks of cascading failures due to bottlenecks taking down whole datacenters.

Often you want to control how your application behaves on a network. This is also a reason why OTP (in Erlang or implemented in other languages) is not popular internally.


I don't dispute one might want to have this control. At the same time, I don't see a theoretical reason one can't have the same programming model, and choosing what computation runs local vs. distributed is just a configuration. A function call or a remote call won't change the domain logic, it's an implementation detail - but today, this leaks into all kinds of decisions, starting from microservices and working backwards to glue everything together again.


The programming model is fundamentally different between local vs distributed. Imagine if every local function call in your monolith significantly increased in error rate and latency. Your optimal algorithms would change - you can't just loop over a network call like you can with a local function call. That's what happens when you take a local invocation and put it across a network boundary. The probability of things going wrong just goes up.


I'm not proposing to code like a monolith - but I don't see a reason there can't be a programming model that unifies that.

APIs like Spark, for instance, make it largely transparent - processes are scheduled by the driver on nodes with hot data caches, and processes that fail are transparently retried; and yet this doesn't impact how you write your query. Effect systems are another thing that can help us reason about I/O as function calls and handle the failability.

I would bet it's a matter of lacking good accepted patterns instead of a theoretical impossibility.


It's not a theoretical impossibility: you can wrap every function call in a monad today if you want. It's just a practical nightmare.


It's way more than "just a configuration".

> A function call or a remote call won't change the domain logic

Understanding performance and minimizing failure modes and their impact on larger systems makes for a whole career as "SRE". Making remote calls all over a codebase creates behaviors that are practically impossible to debug or optimize. But the blocker is the network impact of large applications and the emergence of cascading failures.


This point is what so many miss. The conceptual decomposition seems nice, but the physical implications of that decomposition are horrible. The further you split a function from its caller, physically, the more chaos you're asking for. It's quite simple to see from a physical perspective, but from an abstract perspective it's all lines and boxes.


The difference between separate threads and separate machines should not be abstracted away, but managing them as part of the application through compile-time magic would be cool.


One could use something like Rust feature macros, plus a tool on top of Cargo that automatically picks the set of features required by each deployment.


Sorta the purpose of the Actor model but that has other complexities.


This is a great summary of the problem, upvoted.


Have you seen Unison language? They are positioned as a solution to what you are describing.


The most common architecture for SaaS apps is a frontend served from a CDN + frontends served by app stores, an API server behind an app load balancer (e.g. nginx), a database cluster, and a bunch of async long-lived cron jobs + on-demand jobs.

In that architecture you don't need that many microservices. The API server is prolly the most important bit and can be served as a monolith.

Many 10B+ companies use this architecture. It works well and does the job. Mono/microservices is really about separation of concerns: mostly around what you want to deploy as a unit, redundancy, and what should be scaled as a unit.

Agree with author. Start with one chunk, and split when you feel the pain and need that separation of concern. Fewer pieces are easier to reason about.


I’ve seen Separation of Concerns used as a “killer argument” in many a planning meeting, but reminder it needs to be weighed against the cost of being (un)able to test and validate the entire system atomically.


We did the journey both ways:

Monolith -> Microservices -> Monolith

The first monolith was so bad that microservices made sense at that point in time. The current monolith is an exemplar of why we do not use microservices anymore. Everything about not fucking with wire protocols or distributed anything is 100x more productive than the alternatives.

Deciding you need to split your code base up into perfectly-isolated little boxes tells me you probably have a development team full of children who cannot work together on a cohesive software architecture that everyone can agree upon.


There is no replacement for discipline. If you cannot structure your modules well, what makes you think that your microservice spaghetti would look better?


In my opinion it will become unusable and unmaintainable faster with microservice spaghetti. With monolith you can get by with bad design longer. But anyways, everything boils down to good structure as you said.


It's a means to shift the problem to Someone Else; I've got my codebase, it is mine and it is perfect. Integrating with it is someone else's problem.


Not even just microservices. Even just unnecessary layers or tiers.

I'm working on a project right now trying to spin up the infrastructure in the public cloud for a simple application that some bored architecture astronaut decided would be better if it had an "API tier". No specific reason. It should just have tiers. Like a cake.

Did you know Azure's App Service PaaS offering introduces up to 8-15ms of latency for HTTPS API calls? I know that now. Believe, me I know that all too well.

To put things in perspective, back in the days I cut my teeth writing "4K demos" that would be added to zip files on dialup bulletin boards. In those days, I carefully weighed the pros and cons of each function call, because the overheads of pushing those registers onto the stack felt unnecessarily heavyweight for something that's used only once or maybe twice.

These days, in the era of 5 GHz CPUs with dozens of cores, developers are perfectly happy to accept RPC call overheads comparable to mechanical drive seek times.

I can hear the crunching noises now...


I'm surprised that he doesn't mention the Monorepo + Microservice approach. I've found that most of the pain, confusion and bugs when dealing with microservice architectures has to do with the basic overhead of repo management, not the actual microservices themselves.

If you have a monorepo, and you make a change to a schema (backwards compatible or not, intentional or not), it's a lot easier to catch that quickly with a test rather than having a build pipeline pull in dozens of repos to do integration tests.


I've only had one experience with monorepo + microservice, and it wasn't to my taste. As the volume of code sharing increased, the whole thing became ever more monolithic and difficult to change.

Also, if you have a bunch of services all sharing the same schema, you aren't really doing microservices. Not because of some No True Scotsman, "Best practices or GTFO," sentiment, but because that design question, "Do services only communicate over formal interfaces, or do we allow back channels?", is the defining distinction between SOA and microservices. The whole point of the microservices idea was to try and minimize or eliminate those sorts of tight coupling traps.


The article doesn’t have any connection to monorepo / polyrepo axis. You can use either repo structure for a monolith application, or use either repo structure for separated microservices. The article is just not related to repo structure in any way.


The article isn't related to repo structure, however repo structure can relate to the core issue of how to share code and maintain domain boundaries - search for the "umbrella" thread elsewhere in these comments.


Just because repo structure can relate doesn’t mean that such a relationship involves any properties of monorepo structure vs polyrepo structure.

Any messy code organization or dependency issue you can have in a monorepo, you can also have in polyrepos. Any automation or central controllership you can have in a monorepo, you can also have in polyrepos. Any scale-out chores or overhead work you can have with polyrepos, you can also have with monorepo.

The issues of monolith / poorly coupled dependency applications vs microservices have absolutely nothing to do with monorepo vs polyrepos. Neither option makes any class of tooling easier / harder to build or maintain. Neither makes conceptual organization easier / harder to get right, neither makes discoverability or cross-referencing easier or harder. That whole axis of concern is just fundamentally unrelated to the application monolith vs microservices question.


I'd say, that's the problem with that article.


That wouldn’t make sense, given that repo structure truly is unrelated in any way to the distinctions between monolith applications vs microservices.


But it is.

If you have multiple services in one repo it's much easier to work on each of them.


No, it is not easier. Working on them across multiple repos is just as easy. In either case you write dev tools to apply changes across units. If the units are repos, it adds no additional complexity.


There's another issue - sometimes people choose to migrate to microservices because the monolith "can't be scaled" or "is too slow" or whatever, where often the case is just that the monolith was written poorly. But the team doesn't realise this and they end up copying the same poor design and technical processes that led to the initial problems into the microservice refactor. Now they have the same problems with potentially higher complexity.

A problem I've seen on the project I'm working on is that the team seems to want to conflate orthogonal issues from a design and technical perspective. "We're going to microservices because the monolith is slow and horrible. We're going to use CQRS to alleviate pressure on the repositories. We're going to use Event Sourcing because it works nicely with CQRS."

Question: "Why are we Event Sourcing this simple CRUD process?"

Answer: "Because we are moving from the Monolith to Microservices".


One issue with any form of modularization is that dependency cycles are not desirable and have the side effect of creating a need for ever more modules. The reason is that any dependency cycle can be trivially broken by adding another module.

You get dependency cycles when two dependent services need the same thing and you lack a good place to put that thing. You start with modules A and B. Then B needs something that A has, and then A needs something that B has. You can't do that without introducing a dependency cycle. So you introduce C with the new thing. And A and B depend on C, but B still also depends on A. And so on.
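
A toy illustration of that in Go (package names are hypothetical, and the three "packages" are collapsed into one file so it compiles as-is): the shared thing moves into C, which depends on neither A nor B.

  package main

  import "fmt"

  // In a real codebase these would be three packages: billing and ordering
  // both import shared, and neither imports the other - which is how the
  // A<->B cycle gets broken.

  // --- package "shared": the thing both sides needed from each other ---
  type Customer struct {
    ID   string
    Name string
  }

  // --- package "billing": depends only on shared ---
  func InvoiceTotal(c Customer, amounts []int64) int64 {
    var total int64
    for _, a := range amounts {
      total += a
    }
    fmt.Printf("invoicing %s for %d\n", c.Name, total)
    return total
  }

  // --- package "ordering": also depends only on shared ---
  func PlaceOrder(c Customer, amount int64) {
    fmt.Printf("order of %d placed by %s\n", amount, c.Name)
  }

  func main() {
    c := Customer{ID: "42", Name: "alice"}
    PlaceOrder(c, 30)
    InvoiceTotal(c, []int64{30, 12})
  }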

True whether you do CORBA, COM, SOAP, web RPC, OSGi, Gradle modules, etc. The only difference is that the overhead of creating those modules differs and comes with varying levels of ceremony, management needs, etc. Also, refactoring the module structure gets more complicated with some of these. And that's important because an organically grown architecture inevitably needs refactoring, and that tends to be a lot more tedious once you have microservices. And inevitably you will need to refactor. Unless you did waterfall perfectly and got the architecture and modularization right in one go. Hint: you won't.

Finally, the same kind of design principles you use for structuring your code (e.g. SOLID, maximizing cohesion, the Law of Demeter, etc.) also apply to module design. Services with lots of outgoing dependencies are a problem. Services that do too much (low cohesion) are a problem. Services that skip layers are a problem. The solutions are the same: refactor and change the design. Except that's harder with microservices.

That's why Martin Fowler is right. Start with a monolith. Nothing wrong with those and should not stop you practicing good design. Using microservices actually makes it harder to do so. So, don't introduce microservices until you have to for a good technical or organizational reason (i.e. Conway's law can be a thing). But don't do it for the wrong reason of it being the hip thing to do.


How does a monolith solve dependency cycles, assuming you don’t put all your code in the same module?


It doesn't, but it simplifies moving stuff around. I use an IDE and can rename at will. With microservices, it's: rename, commit, wait for the thing to build and deploy, open a different project, fix all the names, etc. That requires a lot of coordination and is also risky if you get it wrong.

With a monolith, you change everything, create 1 commit. And after that passes CI/CD it can be live in minutes.


The main problem I have with the 'monolith first' advice is that it implies 'microservices later'. It feels like people have forgotten how to build good, simple, modular code.


Martin. The architecture astronaut. Finally returning to earth. Landing in the same spot where he took off. But wiser ...


Keep in mind this article was written in 2015.


Martin has been a character in "enterprise architecture" for a long, long time. He has been the inspiration for many convoluted systems. Actually, I think he coined the term with his Patterns of ... books.


As long as there are books to sell and conferences to present at, it's all cool and dandy.


I was wondering why I was agreeing with this article. It's the most un-Fowler thing I've read from him.


I was thinking the same thing. Funny how full circles come about in this industry sometimes isn't it ;)


If you imagine a monolith as a service that:

  takes a request 
  -> deserializes it/unpacks it to a function call
  -> sets up the context of the function call (is the user logged in etc)
  -> calls the business logic function with the appropriate context and request parameters
  -> eventually sends requests to downstream servers/data stores to manipulate state
  -> handles errors/success
  -> formats a response and returns it
The main problem I've seen in monoliths is that there is no separation/layering between unraveling a request's context, running the business logic, making the downstream requests, and generating a response.

Breaking things down into simplified conceptual components I think there is a: request, request_context, request_handler, business_logic, downstream_client, business_response, full_response

What is the correct behavior?

  return request_handler(request):
    request -> request_context;
    business_response = business_logic(request_context, request):
      downstream_client();
      downstream_client();
    business_response -> full_response;
    return full_response;
    
  business_response = request_handler(request_context, request):
    return business_logic(request_context, request):
      downstream_client();
      downstream_client();
  business_response -> full_response;
  return full_response;  

  request -> request_context;  
  business_response = request_handler(request_context, request, downstream_client):
    return business_logic(request_context, request, downstream_client):
      downstream_client();
      downstream_client();
  business_response -> full_response;
  return full_response;  
    
  something else?
In most monoliths you will see all forms of behavior and that is the primary problem with monoliths. Invoke any client anywhere. Determine the requests context anywhere. Put business logic anywhere. Format a response anywhere. Handle errors in 20 different ways in 20 different places. Determine the request is a 403 in business logic, rather than server logic? All of a sudden your business logic knows about your server implementation. Instantiate a client to talk to your database inside of your business logic? All of a sudden your business logic is probably manipulating server state (such as invoking threads, or invoking new metrics collection clients).

The point at which a particular request is handed off to the request specific business logic is the most important border in production.
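
For what it's worth, here is one possible shape of that border, sketched in Go with made-up names (not claiming it's the one true layering): context extraction and response formatting live in the handler, the business logic only sees the context and an injected client, and status codes never leak into the business layer.

  package main

  import (
    "encoding/json"
    "fmt"
    "net/http"
  )

  // requestContext is everything the server layer extracts before business
  // logic runs: auth, trace IDs, etc.
  type requestContext struct{ UserID string }

  // downstreamClient is the only way business logic touches other systems.
  type downstreamClient interface {
    FetchBalance(userID string) (int64, error)
  }

  // businessLogic knows nothing about HTTP: no status codes, no headers,
  // no database handles pulled out of thin air.
  func businessLogic(ctx requestContext, amount int64, dc downstreamClient) (bool, error) {
    balance, err := dc.FetchBalance(ctx.UserID)
    if err != nil {
      return false, err
    }
    return balance >= amount, nil
  }

  // handler owns the border: unpack the request, build the context, call the
  // business logic, then map the result (or error) to an HTTP response.
  func handler(dc downstreamClient) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
      ctx := requestContext{UserID: r.Header.Get("X-User-ID")}
      if ctx.UserID == "" {
        http.Error(w, "missing user", http.StatusForbidden) // 403 decided here, not in business logic
        return
      }
      ok, err := businessLogic(ctx, 100, dc)
      if err != nil {
        http.Error(w, "upstream failure", http.StatusBadGateway)
        return
      }
      json.NewEncoder(w).Encode(map[string]bool{"allowed": ok})
    }
  }

  type fakeClient struct{}

  func (fakeClient) FetchBalance(string) (int64, error) { return 250, nil }

  func main() {
    http.Handle("/can-spend", handler(fakeClient{}))
    fmt.Println("listening on :8080")
    http.ListenAndServe(":8080", nil)
  }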


> In most monoliths you will see all forms of behavior and that is the primary problem with monoliths. Invoke any client anywhere. Determine the requests context anywhere. Put business logic anywhere. Format a response anywhere. Handle errors in 20 different ways in 20 different places.

This just seems like a poorly (or not at all?) designed monolith, if there's no standard way of doing things, no clear concerns or responsibilities for the various application layers? I mean, I've been there too in organizations, but it just seems like we're skirting around the obvious: the system should've had a better architect (or team) in charge?


>This just seems like a poorly (or not at all?) designed monolith, if there's no standard way of doing things, no clear concerns or responsibilities for the various application layers? I mean, I've been there too in organizations, but it just seems like we're skirting around the obvious: the system should've had a better architect (or team) in charge?

First you need to have people who understand architecture. College does not meaningfully teach architecture. How many businesses are started with senior devs who know what they are doing? How many businesses are going to spend time on architecture while prototyping? When a prototype works, do you think they are going to spend resources fixing architecture or scaling/being first to market?

When a new employee joins, how many companies are going to inform the new employee on standard architecture practices for the company? After how many employees do you think it's impossible for 1 person to enforce architecture policy? Do you think people will even agree what best architecture is?

What about hiring new people? Is it important to hire another dev as fast as possible when you get money, or to have 1 dev fix the architecture? After all, technical debt is cheaper to pay off with 2 devs (50% of engineering resources) than with 1 (100% of engineering resources), and context switching is its own expense...

Once you get into pragmatics you understand that good architecture is a common, in the tragedy-of-the-commons sense. It takes significant resource cost and investment for a very thankless job. So you need someone with authority to make a commitment to architecture, and that person is almost always going to be making a cost-benefit analysis which almost always favors 1 day from now over 1 year from now.


I guess it all boils down to building software with a modular structure rather than mixing business logic all around. I personally think that it is easier to isolate business logic with a microservice structure, but you can also make a huge mess. You can also make a really good modular monolith, where business logic and state are where they belong and not spread everywhere.


It is easier to isolate business logic with a microservice architecture, but as the microservice graph gets more complicated so too does administering it.

How do you make graphs that make sense of your various microservices? How do you make sure code doesn't get stale/all versions are compatible/rolling back code doesn't take out your website? How do you do capacity planning? How are service specific configurations done? How does service discovery work? How do you troubleshoot these systems? How do you document these systems? How do you onboard new people to these systems? What about setting up integration testing? Build systems? Repositories? Oncall? What happens when a single dev spins up a microservice and they quit? What happens when a new dev wants to create a new service in the latest greatest language? What happens when you want to share a piece of business logic of some sort? What about creating canaries for all these services? What about artifact management/versioning?

What about when your microservice becomes monolithic?

Complexity is complexity no matter where it exists. Microservices are an exchange of one complexity for another. Building modular software is a reduction of complexity. A microservice that has business logic mixed with server logic is still going to suffer from being brittle.


I definitely agree. I'm actually thinking that the microservices architecture punishes teams earlier for creating a mess, which is a good thing. It forces you to adopt good patterns, because otherwise nothing will work.

I'm not sure, but I feel that with monoliths the teams get punished later and thus create a bigger mess. But I guess it's more like 60/40 rather than 99/1.


The answer to your questions is: hire competent SREs.


An SRE might help you build systems, but that is a lot of system to build depending on number of microservices. Managing the complexity of N services comes at a cost, maybe that cost is in how seamless and wonderful the testing infra is.

A very well built monolith is very easy to manage.

Most product devs want to trivially build a feature, not deal with the complexities of running a service in production. Abstracting away a request handler is going to be an easier overall system than abstracting services.

As for the oncall/onboarding etc, SRE is there to support and enable, not to be an ops monkey, so that stuff scales with number of services/engineers.


> It takes a lot, perhaps too much, discipline to build a monolith in a sufficiently modular way that it can be broken down into microservices easily.

I have seen that happen: modules imported from all over the place to get a new feature out in the monolith. Note this happened during rapid onboarding; the code-review culture was not strong enough to prevent it.

So the timing of the change from monolith to microservices is critical if you actually want to get rid of the monolith. Otherwise, chances are high you end up with both a monolith and microservices to take care of.


I like the "peeling off" strategy - starting with the monolith and adding microservices from the edges.

I've been working on something that tries to make starting with a monolith in React/Node.js relatively easy. Still don't have much of the "peeling" support but that is something we're looking to add: https://github.com/wasp-lang/wasp


I've heard so many horror stories where a company tried to break up a large monolith into microservices and, after X years of working on it, gave up. For example, in a monolith you can use transactions and change millions of rows. If something goes wrong, the entire transaction is rolled back. With microservices, you can only use transactions within a single service.
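To make the transaction point concrete, here is a minimal sketch in Python with SQLite (the orders/inventory tables are hypothetical, not from the comment): inside one database, a single transaction covers several writes and a late failure rolls all of them back. Once those tables live behind separate services with separate databases, you need sagas or compensating actions to approximate the same guarantee.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders    (id INTEGER PRIMARY KEY, status TEXT);
        CREATE TABLE inventory (sku TEXT PRIMARY KEY, stock INTEGER);
        INSERT INTO inventory VALUES ('widget', 10);
    """)

    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO orders (status) VALUES ('placed')")
            conn.execute("UPDATE inventory SET stock = stock - 1 WHERE sku = 'widget'")
            raise RuntimeError("payment step failed")  # simulate a late failure
    except RuntimeError:
        pass

    # Both writes were rolled back together.
    print(conn.execute("SELECT COUNT(*) FROM orders").fetchone())   # (0,)
    print(conn.execute("SELECT stock FROM inventory").fetchone())   # (10,)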


The main takeaway for me is clean modularity in a system with strong decoupling; the modules can live in a single application boundary (a monolith) or across multiple (services or microservices).

The design challenge becomes making sure that your modules can execute within the monolith or across services. The work to be done can be at a thread level or a process level. The interfaces or contracts should be the same from the developer's point of view. The execution of the work is delegated to a separate framework that can flip between thread and process models transparently, without extra code.

This is how I approached my last microservices project. It could be built as one massive monolith or deployed as several microservices. You could compose or decompose depending on how many resources were needed for the work.

I fail to understand why techniques around these approaches aren't talked about in detail. They have some degree of difficulty in implementation but are very achievable and the upsides are definitely worth it.
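A minimal sketch of that idea in Python (illustrative only, not the commenter's actual framework; all the names are made up): callers depend on one contract, and configuration decides whether the work runs in-process or behind a service boundary.

    from abc import ABC, abstractmethod
    import json
    import urllib.request

    class PriceQuoter(ABC):
        @abstractmethod
        def quote(self, sku: str) -> float: ...

    class InProcessQuoter(PriceQuoter):
        """Runs on a thread inside the monolith."""
        def quote(self, sku: str) -> float:
            return 9.99 if sku == "widget" else 0.0   # stand-in business logic

    class RemoteQuoter(PriceQuoter):
        """Same contract, but the work happens in a separate process/service."""
        def __init__(self, base_url: str):
            self.base_url = base_url
        def quote(self, sku: str) -> float:
            with urllib.request.urlopen(f"{self.base_url}/quote?sku={sku}") as resp:
                return json.load(resp)["price"]

    def make_quoter(config: dict) -> PriceQuoter:
        # Flip between thread and process models without touching caller code.
        if config.get("quoter_mode") == "remote":
            return RemoteQuoter(config["quoter_url"])
        return InProcessQuoter()

    quoter = make_quoter({"quoter_mode": "in_process"})
    print(quoter.quote("widget"))

Callers only ever see the contract, so flipping the deployment model becomes a configuration change rather than a code change.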


This definitely sounds like an interesting approach.

I think, however, that defining these modules and the interfaces between them is the hard part. Part of this work is defining the bounded contexts and what should go where. If I understand DDD correctly, this shouldn't be done by «tech» in isolation. It's something that's done by tech, business and design together. This is hard to do in the beginning – and I would argue that it should not be done in the beginning.

When starting out you've just got a set of hypotheses about the market, how the market wants to be addressed and in which way you can generate any turnover doing it. This isn't the point at which one should be defining detailed bounded contexts; one should instead just be experimenting with different ways to get market fit.

Edit: typo


There should be a name for it. If a name were established so people could talk about it more easily, it would make a big difference. This is a kind of design pattern, although not an object-oriented pattern.


Could you elaborate on the framework you mentioned for flipping between process and thread models? That sounds interesting. Was it released or just used internally for some projects?


Like EJB, which had a remote interface, but you could also call the code directly without a network in between?


A bit of a shameless plug, but I'm really looking for feedback... I've been experimenting with Inai [1][2], a framework that can help build modular, microservice-like software with similar dev-team independence properties, but which can be built and deployed either as a monolith or as separate services operationally. So far, I've had fun building an internal project at a much higher speed than I've managed on any similar project, and had more fun doing it. I feel the idea (which is still nascent) has some merit, but would like to know what folks think.

[1] source - https://github.com/Imaginea/Inai

[2] blog post describing Inai - https://labs.imaginea.com/inai-rest-in-the-small/

PS: I've had some difficulty characterising Inai (as you can tell).


In my main side project, what I've done is separate different conceptual components into Clojure libraries that I then import into my main program (Clojure lets you seamlessly require a private GitHub repo as a library).

This way you get most of the benefits of separating a codebase (such as testing different things, leaving code that barely works in the repo, pointing to older versions of part of the monolith, smaller repos, etc.) whilst integration between modules is a one-liner, so I don't need microservices, servers, information transformation, etc.

There's even the possibility to dynamically add library requirements with add-lib, an experimental tools.deps functionality.


This is an excellent approach that I wish were more well known/common. We use this first before going to a complete service, which becomes easier anyway once you have a few libraries that have been in use and tested by then. Works great for our Clojure code and other languages.


I always use a monolith when building my servers and only split out a microservice when it is really required. A particular example:

I've built a native C++ server that does some complex computations in certain business domains.

This monolith exposes a generic JSON-RPC based API where agents (human or software) can connect and, based on permission sets, do various tasks: admin, business rule design and configuration, client management, report design, etc.

Then an actual business client came along. They have their own communication API and want integration with our server, and of course, "the customer is king", so they want communications tailored to their API. I was warned that this customer is one of many more the company is signing deals with. I'm not going to put Acme-specific code into the main server.

This is where the microservice part comes in. I just wrote a small agent that mediates between the client's API and ours. Not much work at all. When another client comes along, I can add more client-specific parts to this new microservice, or give it to another team to build their own, using the first one as a template.

Physical scalability I do not worry about. We're not Google and never will be, due to the nature of the business. Still, the main server can execute many thousands of requests/s sustainably on my laptop (thank you, C++). Put it on some real server-grade hardware and forget about all that horizontal scalability. Big savings.
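A rough sketch of what such a mediating agent can look like, in Python (the endpoints, field names and "Acme" payload shape are all made up for illustration; the commenter's actual server is C++ with a JSON-RPC API): the adapter translates the client's API into the core's generic API and back, so no client-specific code leaks into the main server.

    import json
    import urllib.request

    CORE_RPC_URL = "http://core-server:8080/rpc"   # assumed internal endpoint

    def call_core(method: str, params: dict) -> dict:
        payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                              "method": method, "params": params}).encode()
        req = urllib.request.Request(CORE_RPC_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["result"]

    def handle_acme_order(acme_payload: dict) -> dict:
        # Translate Acme's field names/shape into the generic core API...
        result = call_core("orders.create", {
            "customer_ref": acme_payload["AcmeCustomerId"],
            "lines": [{"sku": line["ItemCode"], "qty": line["Quantity"]}
                      for line in acme_payload["OrderLines"]],
        })
        # ...and translate the core response back into Acme's expected shape.
        return {"AcmeOrderRef": result["order_id"], "Status": "ACCEPTED"}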


I, like many others, have been saying this for years. Too bad I didn't see this link before; it would have helped convince a few people in the past. Now, fortunately, it doesn't seem that much of a heresy to say that a monolith (or SOA) is the right architecture a lot of the time (maybe most of the time).

I remember, a little more than 4 years ago, I was brought on to save a product launch for a startup. They were, as far as I can remember, already creating the 2nd iteration/rewrite of their MVP, with the first one never released. Just a few months short of their first release, one of their developers started to convince everyone that the code base was a pile of crap and that they needed to rebuild the whole thing as microservices, because there was no other way around it. The system was built in PHP, and the otherwise smart and motivated developer wanted to rebuild it as a set of JS microservices.

He almost managed to convince the management, including the development lead and the rest of the team. And it wasn't easy to convince him that that would have been a pretty dumb move and that it wouldn't have taken just a few more months, just because creating a 'user service' with a Mongo backend was something he could pull off in his spare time.

Then I realized that there is something inherent in these kinds of situations. Some people come around stating that a new piece of technology (or other knowledge) is better, and suddenly the rest of the world is somehow on the defensive. Because they know something the rest don't, how could you prove them wrong? Not easy. And funnily enough, this is actually a classic case of the shifting-the-burden-of-proof fallacy.


I have worked on both types of applications and what he says is very true. The first was a startup that failed to launch because of difficulties in getting the architecture and infrastructure right. It was around 2014, and the company eventually ran out of money and, more importantly, missed the time frame. The second was a massive enterprise product that had 100s of members working on it on any given day. The product was written in Java as a monolith. It was slow to develop and a pain to work on. But it worked and the product succeeded. People complained that it was slow and they were right.

So they decided to break it up into multiple services. First was a PDF report generation service which I wrote in Node.js. Then more and more services were added, and other modules were ported as Node apps or separate Java apps. In the end it was around 12 services and all of them worked well.

The monolith was still there, but it was fast enough and users were happy. That's a lesson I'll never forget: time and velocity are far more important for a product to succeed!


If you design for testability and use proper dependency injection, your "monolith" will already be "ready" for microservices.
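A minimal sketch of that point in Python (the names are illustrative, not from the article): if dependencies are injected behind small interfaces, the same business logic can run against an in-process implementation today and a service-backed one later.

    from typing import Dict, Protocol

    class UserRepo(Protocol):
        def find_email(self, user_id: int) -> str: ...

    class InMemoryUserRepo:
        """In-process implementation; a service-backed one would satisfy the same contract."""
        def __init__(self, emails: Dict[int, str]):
            self._emails = emails
        def find_email(self, user_id: int) -> str:
            return self._emails[user_id]

    def send_welcome(repo: UserRepo, user_id: int) -> str:
        # Business logic depends only on the contract, not on where the data lives.
        return f"Welcome, {repo.find_email(user_id)}!"

    print(send_welcome(InMemoryUserRepo({1: "a@example.com"}), 1))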


From my experience the issue with monoliths is organizational and cultural first, then technical (maybe). Monoliths are usually managed by 1 team (IT for instance). This 1 team has never had to collaborate with other teams on building/configuring/improving the monolith. Centralizing things works well for stability but it is horrible for innovation. For instance, ERPs like SAP are monoliths, how do you inject, say, an AI algo to improve productivity on managing invoices into it? That team pushing for that AI is probably super close to accounting but probably far from the SAP IT team. The SAP IT team is incentivized on stability while the accounting one on productivity, how do you resolve the conflict? How do you work together and collaborate? I am sure microservices have many issues, but they make these conversations not just easier but necessary. I think this is the biggest advantage of microservices.


We are in the middle of the first production-scale deployment of a greenfield microservice-based application now and there are certainly a lot of pain points. On the flip side, I've been a part of about half a dozen extremely successful strangler migrations[0], some of which were downright easy even with a complicated business domain. I often wonder if we would have been better off deploying a monolith in half the time and immediately starting a strangler migration once the rate of business changes slowed down. I've become more and more convinced over the past decade that Monolith First + Strangler Migration is the most stable, safest way to deliver mid-sized software projects.

[0] https://microservices.io/patterns/refactoring/strangler-appl...


I've worked in two major microservices environments. The first was a new application where everything was sloppily wired together via REST APIs and the code itself was written in Scala. A massively weird conflict there, where the devs wanted a challenge at the code level but nobody wanted to think about the bigger picture.

The other was an attempted rebuild of an existing .NET application as a Java microservices / service bus system. I think there was no reason for a rebuild; a thorough cleanup would have worked. But if it had not moved to the microservices system, the people calling the shots would not have had a leg to stand on, because the new system would not be significantly better, and it would take years to reach feature and integration parity.


I recently got introduced to the Laravel framework in PHP. I think it's probably the best implementation of a monolith web framework right now. PHP is very simple if you come from C/Java, and the framework provides you with so much functionality out of the box.


The question isn't microservices or monolith. The question is: What problem are we solving, and what's the most efficient way to solve it?

Requirements should dictate architecture. Data should dictate the model. Avoid unnecessary work. Be mindful of over-engineering.


There is no one-size-fits-all solution here. In my experience it is sometimes good to start (conceptually) as a monolith and then divide the service into microservices eventually. It also depends on the maturity and experience of the team handling the services, because microservices do require a different temperament, both to develop and to manage.

I'm not from the camp that believes in creating services for every single thing - It's the same camp that believes in starting with distributed databases and then moving in the same direction. I believe in PostgreSQL everything and then moving to distributed only if the application demands ... Wait did I just start another war in here !


A lot of people are saying microservices are great if you do it right. What they miss is that microservices require a whole bunch more people than a monolith to operate, and they ignore the cost of adding those people. Really it's like comparing apples and oranges: sure, if you have the resources and the headcount to have 37 and a half teams each owning only one piece of your app in a competent way, go for that architecture. But if not, then stop advocating for a unicorn that only large companies with big budgets can benefit from, and stop preaching it to startups that are struggling to get off the ground as if it should be the industry norm.


I just wonder what kind of organization or project faces this kind of decision.

I've worked on workflow/crud-type web applications whose computational needs could be met using a single server with a couple of GB of RAM using a single database, indefinitely into the future. I don't see why it would occur to someone to split one of those into multiple services.

I've worked on systems for which many such servers were required in order to provide the expected throughput and, in such a case, writing a monolith is really not an option.

Is there a significant middle ground?


Thank you, Martin. I have argued against using microservices when writing systems (until you really need them) for a while and get real pushback.

For me, the issue is that a monolith based on, for example, Apache Tomcat using background work threads lets me also handle background tasks in one JVM. I have not used this pattern in a long time, but when I did, it served me well.

I think Java and the JVM are more suitable for this than Python and a web framework. I tried this approach with JRuby one time using background worker threads but I wouldn’t use CRuby.
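The comment above prefers the JVM for this, but the shape of the pattern is easy to show in a few lines of Python (illustrative only): the same process serves requests and runs background jobs on a small worker pool, so no separate job service is needed.

    from concurrent.futures import ThreadPoolExecutor
    import time

    background = ThreadPoolExecutor(max_workers=4)   # background work threads

    def generate_report(order_id: int) -> None:
        time.sleep(1)                                # stand-in for slow work
        print(f"report ready for order {order_id}")

    def handle_checkout_request(order_id: int) -> str:
        background.submit(generate_report, order_id) # fire-and-forget background job
        return "order accepted"                      # respond to the user immediately

    print(handle_checkout_request(42))
    background.shutdown(wait=True)                   # let pending jobs finish on exit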


Now that Fowler himself wrote about it I hope that the masses follow. Next I hope that the trend of going vertical-first comes back. Mr Fowler, could you please write an article on vertical-first scaling?


Lack of engineering talent + accessibility of the cloud + buzz words got us here. Looking forward to the end of this cycle.

This is somewhat reminiscent of the abuse of higher-level languages and the mindset that computers are so powerful there's no need to think too hard. However, the consequences are no longer limited to a slow and buggy program, but many slow and buggy programs and a large AWS bill too!


The article is from 2015, so doesn't look like this had much impact!


Microservices are a networked/cloud buzzword for the Unix philosophy. Doing "one job and doing it well" implies, though, that you know what that job is, what the right tool is, and that you know how to do it well. Another word for "monolith" could easily be "prototype" for the sake of this article. If only we all had the time and money to do real software engineering, amirite? But time and time again it's been proven to me that "the hack becomes the solution" and that we're not going to go back and "fix" working code.


Your first and biggest problem when you start a technology startup is lack of customers and revenue. Do everything that gets you to the first customers and first revenue, and do everything that helps you find a repeatable & mechanical distribution strategy to reach more customers and more revenue.

Then and only then should you even consider paying back your technical debt. Monolith first always until you have the cash to do otherwise, and even then only if your app desperately needs it.


Microservices are hard. Monoliths are hard too. Focus on the product and the customer, build what they need. Architecture is a means to make a successful product.


Agreed and I'll add that the architecture must evolve into its most efficient form in tandem with the success of the product.


I'm wondering if "microservices" hasn't lost its initial meaning. To me, an application composed of the "core", a message queue and a database server already qualifies as "microservices". I know which parts are "delicate" and which parts are "kill-at-will". That is what I care about at a very high level, not having a separate codebase for every single HTTP route.


The fact that we talk about it raises some questions. Surely it depends on the problem at hand. There are multiple companies solving various issues, and those are either global or local, well defined or ambiguous. There is no one approach, as it depends on where you stand at the moment as well. Another problem is developers themselves, since they are incentivized to push for "over-engineering" because it'll make their CVs look better.


I question the relevance of this with cloud based solutions such as AWS Lambda, where you can spin up a production ready auto-scaled service very quickly; it certainly reduces the cost of ownership and operations. Sure if you are a team of 20 developers working on an app - go monolith.

But if you are a 300 person organization launching something from the ground up, I would choose many serverless solutions over a single monolith.


> But even experienced architects working in familiar domains have great difficulty getting boundaries right at the beginning. By building a monolith first, you can figure out what the right boundaries are, before a microservices design brushes a layer of treacle over them.

I think this is one of the most important points. Often it takes time to figure out what the right boundaries are. Very rarely do you get it right a priori.


"Monolith vs microservices" is a bit like "Fullsize SUV vs multiple sedans". They are used differently, so you should pick that which fits your purposes. And the idea that you have to do one or the other is a bit ridiculous. I've seen plenty of businesses that organically landed on a mix of monolith and microservice and things in between. Don't get caught up in formalism.


I wonder what the trend would be without the push from the huge megacorps that couldn't do without microservices (MAGA mostly) once they decided to start selling their cloud services.

Had they kept those tools for themselves (including k8s), would microservices be just as fashionable? I'd really like to know the answer, while watching my friends' startup of 4 people running their 14 microservices on a k8s cluster.


A microservice architecture should mainly be used to solve runtime issues, such as to improve scalability or fault tolerance. Also, it can be useful in order to adopt new languages or frameworks.

It is not something that should be used simply to clean up your code. You can refactor your code in any way you see fit within a monolith. Adding a network layer in between your modules does not make this task simpler.


I tend to take a “hybrid” approach. I believe in modular architectures, as I think they result in much better quality, but the design needs to be a “whole system” design, with the layers/modules derived from the aggregate.

Once that is done, I work on each module as a standalone project, with its own lifecycle.

I’m also careful not to break things up “too much.” Not everything needs to be a standalone module.


If we were to ignore the mechanism part of microservices, we could say that qmail and Postfix have a microservices architecture. Both of them have fared much better than monolithic Sendmail. And their track records for resilience and reliability are very encouraging too.

There exist other ways of designing 'microservices' that are not necessarily conventional monoliths!


Surely a radical viewpoint against the (current) state of the art in software architecture?

Why swim against a tide of industry “best practice” that says ...

... Let's make our application into many communicating distributed applications. Where the boundaries lie is unclear, but everyone says this is the way to produce an application (I mean applications), so this must be the way to go.


This is the never-ending cycle of "too much order" vs. "too much chaos". It takes a lot of experience to be able to judge how much chaos you want. That is all... experience. I don't think any theory can tell you how much or how little is right.


Monoliths are even easier to manage in 2021 because of workspace dependency management (e.g. yarn workspaces, cargo workspaces) etc. You can have your cake (microservices built as separate packages) and eat it too (a monorepo workspace w/ all your code).


A monolith designed using a component-based architecture, where each of the components has a well-defined service boundary, can easily be split up into a microservices architecture, with each of the components from the monolith becoming a subsequent microservice.


(2015)

I am very interested in what he could add now.


I don't understand why people think that microservices and monoliths are the only 2 options.

https://eng.uber.com/microservice-architecture/


I take this same philosophy at all levels of my code. It's like the big bang: start out with a dense hot lump of code and as it grows and cools off things break apart into subunits and become more organized, documented, standalone, etc.


Modular First.

You start out as technically a monolith, but that is prepared at all times to be decomposed into services, if and when the need arises.

It's nothing too fancy - can be simply another name for Hexagonal Architecture, functional-core-imperative-shell, etc.


It's surprising to disagree _so_ much.

The value of microservices is to isolate risks and have some manageable entity to reason about, to have an understandable scope of lifecycle, test coverage and relations with other services. The larger the piece, the harder it is to do.

Splitting up is normally harder than building up, and often impossible to agree on. Any monolith I worked on was coupled more than necessary because of the regular developer dynamic of using all the code available on the classpath. I've never even seen a successful monolith split - you usually just rewrite from scratch, lift-n-shifting pieces here and there.


Monolith First.

The point of this idea isn't that it says "monolith"; it's that it includes time as a factor. Too much of our discussion focuses on one state or another, and not on the evolution between states.


> The second issue with starting with microservices is that they only work well if you come up with good, stable boundaries between the services - which is essentially the task of drawing up the right set of BoundedContexts.

> The logical way is to design a monolith carefully, paying attention to modularity within the software, both at the API boundaries and how the data is stored. Do this well, and it's a relatively simple matter to make the shift to microservices.

This is the big take-away to me -- for a long while now I've seen this whole monoliths-vs-microservices debate as a red herring. Whether your conduit is functions in a shared virtual address space, HTTP, or a Kafka topic, the real problem is designing the right contracts, protocols and interface boundaries. There's obviously an operational difference in latencies and deployment difficulty (i.e. deploying 5 separate apps versus 1), but deep down the architectural bits don't get any easier to do correctly; they just get easier to cover up (without sinking your deployment/performance/etc.) when you do them badly and latencies are low.

What we're witnessing is a dearth of architectural skill -- which isn't completely unreasonable because 99% of developers (no matter the inflated title you carry) are not geniuses. We have collectively stumbled upon decent guidelines/theories (single responsibility, etc), but just don't have the skill and/or discipline to make them work. This is like discovering how to build the next generation of bridge, but not being able to manage the resulting complexity.

I think the only way I can make the point stick is by building a bundle of libraries (ok, you could call it a framework) that takes away this distinction -- the perfect DDD library that just takes your logic (think free monads/effect systems in the haskell sense) and gives you everything else, including a way to bundle services together into the same "monolith" at deployment time. The biggest problem is that the only language I can think of which has the expressive power to pull this off and build bullet-proof codebases is Haskell. It'd be a hell of a yak shave and more recently I've committed to not spending all my time shaving yaks.

Another hot take if you liked the one above -- Domain Driven Design is the most useful design for modern software development. The gang of 4 book can be reduced to roughly a few pages of patterns (which you would have come across yourself, though you can argue not everyone would), and outside of that is basically a porn mag for Enterprise Java (tm) developers.



Using Haskell-style IO types, couldn't we lift and shift anything at build time between a network call and a function call? A monolith that could shard at any function boundary into microservices.


If starting from scratch, let's say as a startup, it's still true that the vast majority of all software projects fail. Therefore optimising things prematurely is not a good use of time or money.


> I feel that you shouldn't start with microservices unless you have reasonable experience of building a microservices system in the team

Well, yeah... obviously. That's not the same thing as it being bad to start with microservices generally.

I just don't agree with this at all. The designer of the architecture clearly does need to know how to design service-based architectures and needs to have a very strong understanding of the business domain to draw reasonable initial boundaries. These are not unusual traits when starting a company as a technical founder.


What if those boundaries change because an initial assumption turns out to be wrong?

In a monolith, no biggy. With microservices, huge pain.

Moving to microservices is something you do to optimise scalability, but it comes with costs. A major cost is the reduction in flexibility. Starting a new, not-very-well-understood thing needs massive flexibility.


I think we've found in many situations that the microservice boundaries make it easier to change things. We were pretty careful in making fairly conservative choices initially though, and split some of the bigger systems up more over time.


> These are not unusual traits when starting a company as a technical founder.

It's actually not unusual to have technical cofounders who have no prior software engineering work experience.


I see two categories of technical founders.

1. The people who have many years of experience building a highly specialized application using domain knowledge gained at their previous job.

2. The new CS grad right out of college who had big dreams, a lot of time, and high risk tolerance. Unclear as to whether they're starting up the company because they failed to get a job elsewhere or because they think they're about to build the next Facebook.


I'm definitely not in either of those camps, more mid-senior level experience with highly specialised domain knowledge in a completely unrelated field (neuro/QEEG feedback training before, now fintech/insurance)


Conway's Law is king


The big curse of microservices is the word "Micro". It should have just the right size, be it big or small.


Brilliant article Martin and brilliant discussion Hacker News. This is why this forum is still the best.


I worked on a project that was a monolith. A team was put in place to rewrite it as microservices, and shortly after, I quit that job. About a year after that I had lunch with an ex-colleague who told me they were still working on that rewrite.

Looking at the page now it looks like the monolith is still running as far as I can see, about 5 years later. I guess they gave up. :)


Was the rewrite to microservices really the problem here? I’ve worked on a few such projects too, where we decided to rewrite the application, but it never ended up replacing the old existing one. Technology was never the issue in all these.


Most likely not; the new guys were PhD types who were ultra smart but did nothing except have meetings. I think they were over-engineering it completely, which made it impossible to deliver something of value.

However, I think that is often the case of a microservices architecture.


> Most likely not; the new guys were PhD types who were ultra smart but did nothing except have meetings.

There's your problem, regardless of architecture, it would most likely have been over-engineered anyway.


Hah, I made a video with the same vibe: https://www.youtube.com/watch?v=clagrT5BC7g

Martin is definitely ahead of the curve, or perhaps I'm behind.


Microservices primarily solve a team-scaling problem; the technical benefits come second.


The question is at what team size microservices start making things easier. I bet the number is much bigger than people think.


We started with Flask (https://flask.palletsprojects.com/en/1.1.x/) and never looked back. It enables pure, iterative, incremental development.
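For readers who haven't used it, a Flask monolith really is about this small to start (a minimal sketch; the routes and handlers here are made up): one app, one process, and new routes added incrementally as the product grows.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        return jsonify(status="ok")

    @app.route("/users/<int:user_id>")
    def get_user(user_id: int):
        # Hypothetical handler; a real one would hit whatever storage you use.
        return jsonify(id=user_id, name="example")

    if __name__ == "__main__":
        app.run(debug=True)   # a single process is plenty until it isn't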


Article is from 2015. Should this be added to the title?


To make microservices, you just build a monolith first.


He can write all the posts about software engineering practices he wants; his server is still down right now :)


Gall's law.


my TL;DR of the entire article:

> Although the evidence is sparse, I feel that you shouldn't start with microservices unless you have *reasonable experience of building a microservices system* in the team.


[2015]


Architecting systems is a dance around tradeoffs: what weight do you assign to each of the 'ilities'? Some products/services are inherently complex. For those, complexity cannot be destroyed, just transformed from one form to another. I agree that in many cases people jump on the microservices architecture too soon, without first developing a mature understanding of the domains involved in the application as well as the data flow and state transitions.

Some years ago I was involved in re-architecting a monolith. This was a team that cherished testing, had modules and layers in the monolith, swore by DRY and so on. There were a few problems:

* Adding features was an expensive exercise. For all the layers in place, the abstractions and interfaces had evolved over 5 years and not necessarily with enough design upfront.

* Performance was poor. The monolithic architecture was not necessarily to blame here, but rather a document-oriented data store with all the marshalling/unmarshalling done in the application code. The reasoning was that 'we can store anything, we can interpret it any way we like'. In practice, the code was re-inventing what relational databases were designed for.

I proposed and designed a microservices architecture with the team. I had done that sort of thing a few times even before they were called microservices. There were a few advantages:

* The product team had the freedom to re-think the value proposition, the packaging and the features. That was critical for the business in order to remain competitive.

* The development team could organize in small sub-teams with natural affinity to the subdomains they had more experience/expertise in.

* Each domain could progress at the pace that fit, with versioned APIs ensuring compatibility. Not necessarily a prerequisite unique to microservice success; one can argue versioning APIs is a good practice even for internal APIs, but the reality is that versioning internal APIs is often less prioritized, or not addressed to begin with.

There are technical pros and cons for monolith/microservices. Additional data that can inform this decision is the org structure, the team dynamics and the skillsets available. In the case of that project, the team had excellent discipline around testing and CI/CD. Of course there are challenges. Integration testing becomes de facto impossible locally. Debugging is not super difficult with the right logging, but still harder. One challenge I saw with that project, and with other projects that adopt microservices, is that the way of thinking switches to 'ok, so this should be a new service'. I think this is a dangerous mindset, because it trivializes the overhead of what introducing a new service means. I have developed some patterns that I call hybrid architecture patterns, and they have served me well.

One thing to consider when deciding which road to take is how the presentation layer interacts with the microservices. When it is possible to map the presentation to a domain or a small subset of the domains, the microservices approach suffers less from the cons. A monolithic presentation ---> microservices backend could severely reduce the benefits.

Two good references here: 'Pattern-Oriented Software Architecture' and 'Evaluating Software Architectures'.




Thank you, Martin.


Monoliths are a solution in search of problems. Microservices are a solution in search of problems.

Generally, focus on solving the problems first.

Can the problem be solved by some static rendering of data? Monolith is better solution.

Does the problem require constant data updates from many sources and dynamic rendering of data changes? Micro services and reactive architecture are better solutions.

There’s no one size fits all solution. The better software engineers recognizes the pros and cons of many architecture patterns and mix match portions that make sense for the solution, given schedule, engineering resources, and budget.


I don't understand what you mean by "static rendering" vs "dynamic rendering", but I suspect I wouldn't agree with that.

Monolith vs microservice IMO depends more on team size/structure, stage of the project (POC/beta/stable/etc.), how well defined the specs are, … Rather than the nature of the problem itself.


Was hoping for something 2001-related.



