Well-Architected Monoliths Are Okay (robertnorthard.com)
338 points by kiyanwang on Oct 7, 2018 | 190 comments



A recurring pattern that I have noticed: 1) a company the size of Amazon, Facebook, or Google runs into a problem that comes from their gigantic scale, or the fact that they're trying to do something very different (e.g. Google Maps compared to old-style web pages) 2) Big Company comes up with a new technology to solve their problem 3) Many, many small companies that will never become big enough to need this, and are doing far less exotic things that don't require it, will try to make their ordinary web page into a microservice single-page app with a NoSQL backend, because that's what successful companies do.

Imitating success often makes sense; after all, we don't want to imitate failure. But your company is almost certainly not facing issues at all like Amazon, Facebook, or Google, and also does not have nearly as many programmers to do it.


+100 votes to this. My last job didn't need to go microservices and Kubernetes, but we did from the beginning. And the overhead of getting the greenfield project off the ground was immense. I kept thinking a simple, albeit inelegant, monolith would have helped us prove the concept better and that we could have used it to then refactor once operational. If I take nothing else from my time at that startup I'll take this: sometimes you don't need all the fancy tech that is hyped by the incumbent platforms with a vested interest in getting you onto their platform; sometimes it's best to roll out that tried and true Ruby on Rails or LAMP stack to focus on the product.


I'm more annoyed by the extremes. Services are awesome (not using the SOA term...) but how about a monolith for most stuff and splitting out a few key things? I guess it's so boring that no one writes about it...


Kubernetes sure makes deployments easy though.


I call this: Headline Driven Development

Another good example is monorepos. A decision maker sees a headline about Google and Facebook using monorepos and mandates that his company switch. Unfortunately, he or she didn't read or didn't understand the actual article, which explains that it only works because of strict code reviews and extensive integration tests.


We call it Hype Driven Development which is just as bad as Marketing Driven Development... and yes, monorepos, don't even mention it, good heavens.

Another hype that keeps coming back is the magic of trunk based development (with a random PO reading some flashy article about feature switches as the wondrous solution for A/B testing AND faster development).

Nowadays I even consider putting React and Angular into this pack, since, you know, "if it's good enough for facebook/google then it must be really good" - anyone ever tried to increase performance for a react site (and realizing that precompiling templates was not exactly a bad idea years ago) or hitting the wall with inline async helpers and rx magic might know my woes. But then again, give me a fast and reliable server rendered site over an unstable trainwreck SPA anytime and I will be happy.


>anyone ever tried to increase performance for a react site (and realizing that precompiling templates was not exactly a bad idea years ago)

I'm not sure I'm parsing this sentence correctly. Are you saying that precompiling templates and rehydrating them doesn't cut it anymore? If so, why not? I haven't used React much, but I've done some work in a framework with similar principles and I felt like proper precompiling, with basic HTML fallbacks cooked in where possible, provided all the performance of server rendered sites with the added bells and whistles of SPAs (including that most subsequent requests can load quicker than a full rerender, provided JS is present).


AFAIK the render part is purely dynamic, and the advocated style of creating closures with fat arrows to avoid scope binding (which is essentially the same if done in the render section) is quite expensive in terms of performance.

React and Fiber try to be smart about rendering with tree-section diffing, but unless you use immutable data it's "not enough" to rely on - without immutability, even in a smaller redux app there is a good chance that you have unintentional re-renders triggered, which, while they may not create new tree-branch renders, still need to be evaluated.

This of course applies to the client, I don't have experience with nextjs or similar tools.


The issue with closures is easily solved: just declare the handler as a class property (or a const outside the component if you're using an SFC) and then use it instead of declaring it inside your render JSX. Any React-aware linter would not let you declare closures in JSX.
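In TSX that looks roughly like this (illustrative component, not from any particular codebase):

    import * as React from "react";

    interface Props { label: string; }
    interface State { count: number; }

    class Counter extends React.Component<Props, State> {
      state: State = { count: 0 };

      // Declared once as a class property: render() hands out the same reference
      // every time, so children that compare props shallowly don't re-render.
      private handleClick = () => {
        this.setState(prev => ({ count: prev.count + 1 }));
      };

      render() {
        // Avoid onClick={() => ...} here - that allocates a fresh closure per render.
        return (
          <button onClick={this.handleClick}>
            {this.props.label}: {this.state.count}
          </button>
        );
      }
    }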


Can you recommend a tsx-friendly linter and its settings? The sites I worked on never followed this rule, and whenever I tried to bring it up, it was too late to refactor things (at least from a roadmap/burndown PO/SM perspective).


I use tslint and tslint-react, the rule is named "jsx-no-lambda"
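For reference, the relevant bit of a tslint.json using tslint-react looks roughly like this:

    {
      "extends": ["tslint:recommended", "tslint-react"],
      "rules": {
        "jsx-no-lambda": true
      }
    }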


> Another good example is monorepos. A decision maker sees a headline about Google and Facebook using monorepos and mandates that his company switch.

Unfortunately he didn't know that `git` does not scale well with the size of the repo and Google have built their custom repository system to tackle this.


Google used Perforce, which has a somewhat different model from git and scales a lot better. A monorepo is also the obvious way to use it, as you can't (I don't think?) branch between depots.


It seems architects often do it so they can put that technology on their resume to find a higher-paying job. Your org is their school, and you are stuck with their convoluted buzzword-filled "training" stack when they hop away for a bigger company or project using their fluffed-up resume.


Thing is, Google made the monorepo decision back when they were still small.


I feel similar sometimes and my next thought is usually, “what part of the system is breaking down such that engineers at small companies don’t understand the scope of their projects?”. Is it a lack of experienced/authoritative technical leadership? Is it bad blood from the rails monolith days? Is it a symptom of a high density of top-tier minds mob-engineering solutions? Are engineers just lazy and prefer the industry status quo framework and some config scripts to the work of actually building software?... Is it even a problem that can be quantified (small company x could be here instead of here if they hadn’t spent resources following the giants)?


Yes it's a lack of good technical leadership.

I think labor market incentives distort a lot of engineering decision-making especially at smaller companies, where the boss heard that React is hot and decides Solution X should be built with React.

Developers eager to develop their skills, increase their rates and advance in seniority end up focusing on the new hotness because of this.

Engineers who want to increase their impact and scope should focus first and foremost on the skill of making smart tradeoffs between technical realities and the requirements of other business functions--aside from simply writing code, this is what engineering is indispensable for.

Beyond that it pays to go deep in your mastery of a particular platform, the dependencies underlying it, and related technologies--all of these enhance your ability to design solutions.

Unfortunately the reality is there are very real financial incentives to keep jumping to the new hot thing every year.


> Engineers who want to increase their impact and scope should focus first and foremost on the skill of making smart tradeoffs between technical realities and the requirements of other business functions--aside from simply writing code, this is what engineering is indispensable for.

I have recently started contracting in London, and this is 100% what I have found in my younger colleagues. Trying to do everything "the right way" without keeping the business constraints in mind at all has created total reciprocal distrust with upper management - which in turn isn't at all able to explain how being able to do that makes you a much better developer for the real world, not for some fantasy world where time and resources are infinite.


Some additional root-causes:

* Ex-engineer managers pointing out that technology, then either brown-nosing or incompetent engineers running with it

* Tech debt: throw-away prototypes / experiments forced into becoming production products

* Engineers padding their technical resume (probably intending to transfer before the debt is due)

* Engineers reusing existing knowledge instead of learning

* Premature engineering, small teams building for a big (hypothetical) future instead of building everything that comes in between

* Consultants / Contractors

* Low hiring bar

And plenty more. It's usually a systemic failure, with multiple parties failing to do what's best for the customers and the business.


> * Engineers reusing existing knowledge instead of learning

Usually I see the opposite. Engineers jumping on hot technology of the day (and making a half arsed job of it) rather than getting to know any one technology well.


Indeed! The good systems I see are the result of tuning and adjusting over time based on domain experience and experience with the stack. Good stacks take elbow grease to tune for a given environment: incremental factoring and cleaning based on actual experience with it. Development efficiency and productive reuse does NOT come in a box. Barring a huge AI breakthrough, the latest buzzword won't do your hard thinking and analyzing for you.


> Are engineers just lazy and prefer the industry status quo framework and some config scripts to the work of actually building software?

It's worse than lazy; it's the new way of doing engineering, which is gluing together ready-made components.


Isn't that the method of every mature branch of engineering? Few projects involve inventing new tools, materials or processes in civil or automotive engineering, for example.


I should have added that I don't enjoy fill-in-the-blanks kind of jobs.

I agree with the sibling comment of 'dcow'.

I will add a) innovation is a slow process and b) that a lot of projects take the _fast path_ of "fill-in-the-blanks" tools instead of thinking about a good solution.

To avoid speaking in a vacuum, I will only cite 'Ansible', which is a tiny improvement over the previous hype. Obviously, if you consider the whole 'Ansible + Galaxy' there is value, but it is still the wrong solution. All arguments I've seen for Ansible seem wrong to me.

AFAICT that time would have been better spent on functional devops approaches. My favorite is 'guix'. Also, I understand that Ansible Galaxy was built with thousands of man-hours by hundreds of contributors, whereas a functional approach will probably require more time and more focused effort.


My point here was not that people shouldn't reuse good solutions and avoid redundant work if possible. It was that if this approach is "overly" desirable, then we will naturally see solutions deployed that are not 100% tailored to their application. Writing and designing good business software is _work_.


It's fashion-driven development.

Many cannot come to grips with the fact that the majority of work is just plumbing, yet another CRUD app or just plain maintenance.

So it is cooler to keep pushing for things like those companies use, even if there is no need; after all, the CV needs to match the new buzzword-soup filter from HR.


This doesn’t only apply to technology. It applies to hiring and other organizational processes as well.


Rings true. I once worked a contract at a place with proprietary hardware, that wanted their software extras to be purchased separately, and accessible through an "app store". This seemed to imply they thought somebody else would develop apps for their store, which was odd because if their proprietary hardware (which was quite expensive, justifiably so) sold 1000 units it would be considered successful. Basically nobody made apps for their app store except the people who worked for the same company that made the hardware, and I had a hard time understanding why they didn't use the (much simpler) model of "if you pay us $X more we'll throw in this bit of software". Instead, they had to develop all of the software for running and maintaining an app store, including the issues of making sure that apps uploaded to their store were not malware.

Then, I realized what was going on: Apple had introduced their app store a few years before, and it was a big deal. Steve Jobs was the CEO on the cover of every magazine. The CEO looked at Apple, and said, "we must have an app store", and that was all there was to it, even though Apple sold several orders of magnitude more units, making the concept of an app store way more suitable for them than for the company I was working for.


e.g. "the spotify model"


> also does not have nearly as many programmers to do it

I tried to make this point at the last place because they were insisting that because Amazon et al used microservices, they were obviously the right way. They were planning on 19 microservices to replace the monolith with a dev team of ... 3 perms and 2 contractors.


> 19 microservices ... with a dev team of ... 3 perms and 2 contractors.

I thought the main benefit of a microservice architecture is to break up an application into smaller services that are about the size that a small development team can build and maintain on their own. I'm not experienced in this area, but more than one microservice per developer seems like a major "architecture smell."


That's not the main benefit. The reason Google et al split services up into independent servers is for physical reasons. They have large datasets that don't fit on a single machine, so they have to shard the servers that handle it, and they want to maximise their use of memory and CPU, so running unrelated tasks on the cores is a bad idea - at which point why link in the code? Before you know it you've got "microservices" except they aren't micro, they're machine sized.

Somehow this got mangled into "microservices are good because they help teams work together" which isn't what it's about at all. At least at Google the smallest hello world server was something like 50mb when I worked there, just because of the huge dependency graph. It was common for servers to pull in hundreds of modules maintained by nearly as many different teams, all in a single binary. Each server was essentially a "monolith" except for the stuff that had to run in separate backends because of resource constraints.


> more than one microservice per developer seems like a major "architecture smell."

Yeah, it's normally that you need N developers per microservice (I've seen N=5 recommended, for example.)


The prefix micro in Microservice is very misleading. It is a microservice in the context of a >1,000KLOC monolith. I do like the N=5 guideline as a way to make you think twice before making too many little services.


> I'm not experienced in this area, but more than one microservice per developer seems like a major "architecture smell."

It's not a smell; it's a hot, steaming turd.


Exactly my feeling. Also, when I hire people coming from the startup industry I have to explain why we don't use kubernetes (we manage 2 servers for our production) or redux (good old publish-and-subscribe works great for us). Some of them just don't get it. Others feel relieved.


I agree, it does sometimes feel that everyone is taking crazy pills. Why did everyone start caring about mutability all of a sudden?

I really like MobX ATM. It's a lot easier to explain to new people as well. Redux is like... too much...


People started caring about mutability for two reasons, I would say:

- It’s a source of thread-safety bugs. With multiple cores even in phones and async programming rising this has become a bigger issue.

- It’s a source of general program logic bugs.

I wouldn’t say that it has come all of a sudden. Like other modern techniques it has been standard in functional programming for years. In mainstream programming languages it’s also not totally new, for instance immutable strings were a design decision in java.


You are so right about thread-safety: immutability is a major improvement there. However, I am pretty sure OP was referring to immutability in JavaScript, which is single-threaded by design, and yet many JavaScript developers treat immutability as a no-brainer; it is seen as a best practice to the point where people don't even remember what it solves, just that it must be used.


Ok, fair point - but redux is great even if your app is small.


Exactly that. Our app is >60kloc of typescript/react, we target android, ios, macos, win32 sharing >85% of the same source code with react native and react web, many abstractions, a lot of local storage and sync with a remote server, and so on. Not a small app. Still, we don't use redux, just good ol' publish and subscribe with appropriate separation of concerns. My point is there are many ways to manage an app's state; redux is one of them, and people can't get to understand how there are many other possible software architectures that provide maintainability and scalability of the source code, and yet do not involve what is seen as the all-encompassing pattern/tool called flux/redux.
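For the curious, the pattern is roughly this (a stripped-down sketch with illustrative names, not our actual code):

    type Listener = (payload: unknown) => void;

    class PubSub {
      private topics = new Map<string, Set<Listener>>();

      subscribe(topic: string, listener: Listener): () => void {
        const set = this.topics.get(topic) ?? new Set<Listener>();
        set.add(listener);
        this.topics.set(topic, set);
        return () => { set.delete(listener); }; // unsubscribe handle
      }

      publish(topic: string, payload: unknown): void {
        this.topics.get(topic)?.forEach(l => l(payload));
      }
    }

    // A view subscribes to the topics it cares about instead of connecting to a store.
    const bus = new PubSub();
    const stop = bus.subscribe("settings:changed", p => console.log("re-render with", p));
    bus.publish("settings:changed", { theme: "dark" });
    stop();

The separation of concerns comes from modules sharing only topic names, never each other's internal state.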


Well, this is quite a bit of a generalisation. My business eradicated servers and containers and replaced them with AWS Lambda Functions composed in a microservices architecture backed by DynamoDB tables. The result was an order of magnitude reduction in operational complexity, increased stability and reduced cost. This is not a fad. This is real.


Lucky you.

There are very few established businesses which can 100% squeeze themselves into the constraints of a serverless architecture.


> also does not have nearly as many programmers to do it.

Nor the underlying infrastructure, such as, for example, physical server hardware that's a fraction of the cost of cloud.


Kubernetes comes to mind.


Thank you for saying this. It's a personal battle preventing me from working on my own startup - trying to imitate the likes of Google and Amazon.


I hear this quite often but the case is that if you are looking for financing the main technical concern of the investors is how (and if) your application does scale - talking about SaaS. If a SaaS gets popular the new user load can be quite heavy and in the past there were a lot of stories about companies that needed to rebuild their system because of this (I think Pipedrive was the last example I heard of). Although it makes sense to ignore scaling at the beginning you need to design your application in a way that you can easily transition to something that can scale.


> the main technical concern of the investors is how (and if) your application does scale

How often is this borne of ignorance, though?

> in the past there were a lot of stories about companies that needed to rebuild their system

More specifically, are investors merely remembering the struggles of the dot-com boom, when transistor density was one thousandth that of today?

Internet usage (number of users and number of services each one uses) has grown, too, but I haven't seen statistics that suggest this would be more than 100x of 20 years ago.

> Although it makes sense to ignore scaling at the beginning you need to design your application in a way that you can easily transition to something that can scale.

This presents a false dichotomy. One doesn't have to ignore scaling at the beginning to make the decision to trade initial time-to-market for eventual "horizontal" scalability (usually what is meant by "can scale" [1]), but that may be what happens in practice.

[1] even though one can buy a remarkably large single server at a 3-6x price premium over mid-range single servers, as well as use other, more traditional techniques, before "needing" to rebuild the entire system.


The investors I know and have to work with are mostly not technical people. They scan the news for investments and reported problems in growing businesses. If you can convince them that you have thought about scaling issues and know how to solve them when they come up, you should be fine :)


That belies the previous assertion that they're concerned with the "how" of scaling. What you're describing is merely the investor (usually VC, which has been discussed here before) behavior of demanding that founders have a confident answer to potential challenges, even if it's bluster.

That's a situation fairly far removed from a VC refusing to invest because the company isn't imitating a FAANG's architecture.


I've seen this so many times unfortunately. Additionally, you don't need a big data architecture when you get 20GB of data/year.


> you don't need a big data architecture when you get 20GB of data/year

Probably not, even if it's 50x that at 1TB/year, unless it's in very small chunks and/or requires an outsized amount of (possibly realtime) processing.

Current commodity servers will scale up to 12TiB of main memory, and if you're willing to use previous-generation CPUs, those servers can take twice the RAM (24TiB). Under half a megabuck without storage.


Wet Streets Make Rain(tm).

Not the same thing, but the psychology is related.

Ref: Michael Crichton; Gell-Mann


Adding to 1): more often than not, the "scaling" issues are 95% social and 5% technical.


Understatement of the month. Last I heard, the HN consensus was that it's better to delay microservices (personally I'd say wait until profitable or >100 engineers).

Even Twitter, probably the most successful performant microservice-at-scale company advocates waiting as long as absolutely possible to make microservices. They said "It fixes one problem, and makes every other piece of application development significantly harder."


You're absolutely right. Amazon was already a huge successful company when they began their microservices journey. Mr. Bezos is the only chief executive I know of who has knowingly applied Conway's Law[1] to corporate governance. Amazon's conversion to services is best understood as a hybrid managerial/engineering decision. This means that the microservices approach only makes sense as part of a larger organizational strategy. A small organization is almost always better off building a monolith, because while it's less efficient at scale, if the organization is successful enough for that to be a problem, it's also successful enough to be able to afford the rearchitecting, assuming competent leadership.

[1]https://en.wikipedia.org/wiki/Conway%27s_law


Amazon is using SOA, not microservices as they are understood today.


The need for microservices has little to do with company size, they have to be the right tool for the job. People mistakenly use them for scalability, or to partition work between teams.

I'm in a small engineering org, and I split out a bunch of security-critical code from an insecure monolith and moved that into a microservice, running in a very tightly controlled and audited environment. A tiny part of my runtime needed different security from the rest of it, and a microservice was the easiest way to accomplish it. Now, I've traded a security problem for a latency problem, as everything that used to be handled internally is now an RPC - though this is an easy problem to solve.


Ha! A close cousin to what I call "Resume Driven Development". Most engineers don't want to add to or maintain something someone else built prior. Since most people are working on and inheriting monoliths at small to midsize companies, the "micro services are what successes like Google do" era has been a real boon to RDD because you've got this great ready made justification for a rebuild. It would be a much harder sell to rebuild a better 2.0 of your monolith. In most cases, you don't need to do either for business value. But we pretend it is needed so that we can get a chance to do something blue sky for a change. I'm pretty conscious of when I'm RDDing but I've seen a lot of folks really delude themselves into thinking there is an urgent need to rebuild. It's very amusing watching RDD theater play out.


I think JWTs are supposed to help here, allowing other services to self-validate signed credentials instead of always reaching out to the external service.
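A rough sketch of the idea, assuming the auth service signs RS256 tokens and the other services hold its public key (uses the npm jsonwebtoken package; the path and claims are illustrative):

    import * as jwt from "jsonwebtoken";
    import { readFileSync } from "fs";

    // Public key distributed out-of-band (or fetched once and cached).
    const authPublicKey = readFileSync("auth-service-public.pem");

    export function verifyLocally(token: string): { userId: string } | null {
      try {
        // No network round-trip to the auth service: the signature check is in-process.
        const claims = jwt.verify(token, authPublicKey, { algorithms: ["RS256"] }) as { sub: string };
        return { userId: claims.sub };
      } catch {
        return null; // expired, malformed, or not signed by the auth service
      }
    }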


There's quite a difference between splitting out a few components for a specific purpose, and going full-on "microservice architecture" IMHO


Scalability is what microservices give you, at the expense of nearly everything else.


Independent deployment

Multiple languages

A/B testing at deployment for small services

Less dependency on a specific software stack

Security compartmentalization

Strict programming by interface

Failure isolation


Independent deployment: Rarely needed.

Multiple languages: No shared infrastructure.

A/B testing at deployment for small services: Feature flags.

Less dependency on a specific software stack: More dependency to homegrown immature solutions.

Security compartmentalization: Isolated half-assed solutions with few people responsible.

Strict programming by interface: Not unique to microservices, some people call this "static typing".

Failure isolation: No shared infrastructure, blindsides, definitely not unique to microservices.


Independent deployment: having deployed hundreds of microservices, I can tell you it is a godsend. Quickly iterating on a specific microservice without running risk of down-time is useful right from the start. We would sometimes deploy the same microservice multiple times a day to improve performance or fix a critical bug. Apart from scaling, this is my number one reason to stay with microservices.

Multiple languages: the shared infra in this case is your Kubernetes yaml files and Docker build files, both of which can be shared easily. The rest is either RESTful or Kafka consume/produce. Python/R/Scala/Java/C#/Haskell/OCaml can all interface with that.

Feature flags are a possible solution. I worked with them in the past and, applied correctly, they can offer a similar experience. However, one often needs more than just a boolean filter: you need to have the logic in your application to route the requests to, say, two different implementations of the same interface. Do this on the micro-service scale, and you get a nice SoC at the proxy level. It moves A/B testing to be part of the infra, not the application logic.
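Concretely, the in-application version tends to look something like this (hypothetical interface and flag check):

    interface RecommendationEngine {
      recommend(userId: string): string[];
    }

    class LegacyEngine implements RecommendationEngine {
      recommend(_userId: string): string[] { return ["bestsellers"]; }
    }

    class ExperimentalEngine implements RecommendationEngine {
      recommend(_userId: string): string[] { return ["personalised-feed"]; }
    }

    // The routing decision lives in application code; a proxy-level split keeps it out.
    function engineFor(userId: string, inExperiment: (id: string) => boolean): RecommendationEngine {
      return inExperiment(userId) ? new ExperimentalEngine() : new LegacyEngine();
    }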

More dependency to homegrown immature solutions: care to elaborate?

Isolated half-assed solutions with few people responsible: care to elaborate? My little micro-service component needs the bare minimum of access, which I can precisely provide to it. No memory is shared, no storage is shared... Often it is much easier to prove that the system stays within confidentiality and availability limits. Networking allows me to transparently monitor all data flowing through my services.

Strict programming by interface vs. static typing. Oh yes, I totally agree with static typing! Such a big advantage over dynamic typing, read my other comments on HN. However, there is no static typing across versions of your same software. Forward compatibility is hard to achieve when all components need to be upgraded at the same time. I still dread the days when we would upgrade 'a major version' and all kinds of database tables had to be changed, causing chicken-and-egg problems. Not saying that this problem is completely eliminated with a micro-service architecture, but it forces developers to think about what can be reasonably isolated, causing a higher SoC. It also prevents the humongous unmaintainable 300+ table RDBMSs, which are often the primary cause of stagnated development.

Failure isolation: I don't understand your reasoning, sorry.


And the reason some developers don't understand micro-services is because they don't do any of those things.

They just stick to MVC and their preferred language.


I generally agree, but I'd make a bit of a distinction: that doesn't mean making a pure monolith. You can still spin off dedicated services where it makes sense without adopting a full "microservices architecture".


Thinking about micro-services as a scaling solution is only ever looked at from the orchestration perspective for some reason. Most of the lessons there can be applied to scale your code as well so that you can spin parts of it off with ease. This also has the benefit of giving you a cleaner & more maintainable code-base so there is still a gain even if you never spin anything off, which could be argued was the problem in the first place.

This is perhaps the best bang for your buck until you actually know what the bottlenecks are as it otherwise becomes a brute-force approach to "scale everything individually". Unfortunately "our code is ready to scale" doesn't quite sound as cool as "we run a thousand micro-services".


I found this to be true with auth. Big monolith but I split off auth, which made using a single user authority much easier for all kinds of stuff that wanted nothing to do with the rest of the monolith.


There are definitely some services that can be self contained and if it is a legacy codebase, modernized in an isolated way.


And Domain Driven Design can help you understand which parts are the best candidates


I specifically agree. The 'where it makes sense' has to be based on something practical, not theoretical nor in-the-not-immediate-future. Currently in one of our products we have 4 back-end services: webapp, api, relay, and broadcast. The scaling requirements and lifetimes of the relay and broadcast instances differ. The relay also has audio requirements. It all depends on your environment and the team's ability to deal with what's required.


>personally I'd say wait until profitable or >100 engineers

Personally I'd say delay until you can find an area causing such a performance or scalability issue that it justifies its own repo(s), build pipelines / devops / ops, deployable artifacts, and team to hold the context on it.

Without that, your profitability or engineering team is best invested into extending your offering to better service your customers.


We did this before with COM/CORBA, and then 'scaled to size' with DCOM and CORBA. And yeah, that went awesome. So many machines with the potential of having the wrong versions, or expecting certain results and not getting them - it was catastrophic. Not to mention that so many teams would re-work their components and all hell would break loose in seemingly unrelated parts of the app.

The more things change, the more they stay the same.


Microservices move transactional concerns out of the database and across service boundaries. If you ever need to transactionally write data to two different microservices, you need to roll your own rollbacks. And unless you store this rollback state in a reliable datastore, there's no going back. The tooling to do this properly, as far as my research goes, does not currently exist.
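To make that concrete, the hand-rolled version looks something like this (service endpoints are hypothetical; fetch is the standard Fetch API):

    async function placeOrder(order: { id: string; items: string[] }): Promise<void> {
      const reserved = await fetch("https://inventory.internal/reservations", {
        method: "POST",
        body: JSON.stringify(order),
      });
      if (!reserved.ok) throw new Error("reservation failed");

      const billed = await fetch("https://billing.internal/charges", {
        method: "POST",
        body: JSON.stringify(order),
      });
      if (!billed.ok) {
        // A compensating action instead of a database rollback. If this call fails too,
        // or the process dies first, the reservation leaks unless the intent was
        // persisted somewhere durable.
        await fetch(`https://inventory.internal/reservations/${order.id}`, { method: "DELETE" });
        throw new Error("charge failed, reservation released");
      }
    }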

I much prefer horizontally scaling a big old monolith backed by a sharding database like CockroachDB or Cassandra. Same scalability, and you get to keep some ACID semantics.

Concerns about code size are overblown. Facebook's mobile app is utterly massive and it still runs mostly okay even on weak devices. The maximum practical code size for a monolithic server side app will probably never be reached. We're talking maybe 100 million LoC before you run into real problems, especially if you use a VM language like Java or C# where hot and cold code can be dynamically swapped out and code is stored in a very space-efficient fashion.

When you reach the scale where the codebase size is an issue, you've probably already done several rewrites to deal with such massive traffic volume.


> Facebook's mobile app is utterly massive and it still runs mostly okay even on weak devices.

I agree with your comments but this is a bad example. The Facebook app is horrible to use on high end devices. The mobile website is much more performant and has most of the core functionality.

The app is actually worse in one way - they chose to use the iOS WebView that doesn't support the native ad blocking framework. So when you click on an external page, it's usually horrible - because modern web.


And let's face it - you are never going to get your own rollback logic right. There will simply be too many edge cases to worry about and test.


Yes. That's why I'm amazed there isn't some ZooKeeper-ish system out there for supporting microservices.

The fact is that microservice tooling is in its infancy, and shouldn't be used in production unless you're willing to roll your own everything. I worked at a place that tried to use microservices + event sourcing + CQRS; predictably, a massive disaster.

I still think the monolith + distributed database will win out. I've never heard of a time when horizontal scaling was a problem not related to the database.


I often read about (Silicon Valley) companies saying things like "we have a thousand microservices" or "we have hundreds of microservices" and I really can't wrap my head around what all of these services do. As someone who has primarily worked on monoliths, service-oriented architecture seems like an intelligent idea to me, particularly with regard to scalability. However, I can't see getting so fine-grained that you have that many services, especially considering that from what I've read each microservice should be backed by its own datastore. Perhaps it's just my own lack of first-hand experience showing, but it seems like it would eventually devolve into a confusing, fragmented, and non-performant mess.


What you need to keep in mind is that micro services have a huge fixed cost but the marginal cost of adding a new one is small.

In other words: Creating or migrating to a micro service architecture is very expensive but once it’s in place, adding a new micro service is trivial. In Zalando the goal is that you can do so within half an hour, that is take an idea, implement it and deploy it to production.

This leads to architectural decisions no sane person would make at a smaller company. For example let’s say you get data from an external company via SFTP and you make it internally available, so what do you do?

1. You create a micro service that polls the SFTP of the supplier for new files and sticks them into an S3 bucket.

2. You create a micro service that takes the files from S3, parses them, transforms them into JSON and publishes events to Kafka.

3. You create a micro service that takes those events, enriches them with some other data and republishes new events.

...

Especially when you use Kubernetes, you also start thinking about infrastructure completely differently. Why use multiple threads or processes for example, when you can just run multiple instances of your application very cheaply and temporarily run further instances to handle background tasks?


And after a year running like this you have 50 micro-services per engineer, all of them undocumented, poorly named, with unclear responsibilities, using the bus to communicate with god knows who... I might be old fashioned but I would take a monolith over this over-engineered mess any day of the week.


In Zalando, every PR made must be reviewed by at least one other engineer. Every application must be registered, which involves giving it a name and a description and associating it with the team owning the service. APIs that cross team boundaries must further be reviewed by an API guild, a group of people who are experts in API design; this also applies to events.

Events go through Nakadi, which enforces a schema and authorization. You can further tell which services are subscribing to such events and, through the aforementioned application registry, who is responsible for them.

Additionally compliance with various laws and shareholder expectations requires regular reviews, to identify and fix issues such as missing documentation, monitoring, SLOs, data storage restoration tests (in production), load tests etc.

The scenario you describe is not possible or allowed. As awareness of and adherence to these rules is also part of performance reviews, it's also not in any engineer's interest to do that.

You’re right of course that one needs to be aware of that.


At Simacan (also a Dutch company) we have exactly the same architecture. All of the deployment code is duplicated, leaving only the business logic to fill in. Fast, easy, simple to understand and scale.


I have seen companies saying they have a thousand microservices to mean that they have a thousand microservice servers running, but they might have hundreds of application servers in each service pulling from the same queue.


Er, but the description of a “well-architected monolith” here is a set of loosely coupled components (with specific criteria for where you draw component boundaries) communicating over an external set of message queues, which, while it doesn't have all the features which define a microservice architecture, is itself a widely documented, well-known, older (heavily documented in the 1990s, possibly older) alternative to monoliths. I suppose this is being called a monolith because everything except the message bus is implicitly jammed into a single process rather than each module being in its own process. But, even aside from the modularization being different from a monolith, bringing messaging out of process makes a huge difference compared to even a modularized single-process solution.


I'd argue you don't need an external message queue. You can use in-process ones or an Actor-based system.
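Something like this is often enough inside a single process (an illustrative sketch, not a library):

    class InProcessQueue<T> {
      private items: T[] = [];
      private waiting: ((item: T) => void)[] = [];

      enqueue(item: T): void {
        const waiter = this.waiting.shift();
        if (waiter) waiter(item);      // hand directly to a waiting consumer
        else this.items.push(item);
      }

      dequeue(): Promise<T> {
        const item = this.items.shift();
        if (item !== undefined) return Promise.resolve(item);
        return new Promise<T>(resolve => this.waiting.push(resolve));
      }
    }

    // The "email" module consumes; every other module only enqueues.
    const emailQueue = new InProcessQueue<{ to: string; body: string }>();

    async function emailWorker(): Promise<void> {
      for (;;) {
        const msg = await emailQueue.dequeue();
        console.log("sending email to", msg.to); // placeholder for the real send
      }
    }
    void emailWorker();
    emailQueue.enqueue({ to: "user@example.com", body: "hello" });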


I'd argue the message queue is, itself, likely not needed. Not directly.

I think, ideally, for a system to be successful, you need to understand all places that you either build up messages, threads, buffers, whatever. It ultimately doesn't matter which you do. Just don't do too many. And don't make migrating a solution from one type to another part of your critical path to launch. Logging, in particular, is something I've grown tired of people reinventing before they have even launched.


Obviously, you don't need an external message queue to have a functioning system, only to have one that meets the article's definition of a “well-architected monolith”.


Play framework has support for this via akka and it works quite well from my experience ^^


Can anyone recommend good examples of monoliths in open source? Preferably in Go?

I inherited a bunch of micro services at my current job with a fairly small team. My feeling has been that the code was prematurely micro-serviced considering the team is so small and every service was mainly just CRUD in front of the same Postgres instance.

I’ve slowly been demonstrating the benefit of modularized “sub-apps” in a single monolith over lots of microservices that all reinvent the wheel. And I think I’ve convinced them that this is easier going forward. But I’m at a loss sometimes about what boundaries I should be putting in place such that we don’t end up with a ball of mud in two years.


Not in Go, but Discourse is a monolithic Rails app on GitHub which I've used as an example for my employer's code base in the past. Another example is GitLab's code base (also Rails).


Every Rails app is a monolithic Rails app


Not necessarily, but generally (99%) yes. It's both an advantage and disadvantage of the framework. Use the right tool for the job :).


Not at all true - plenty of Rails apps are just a REST API for a service.


I think what he meant is that Rails as a framework doesn't push you to implement modularity internally. The convention chosen to organize your files pretty much guarantees a monolith.


Kubernetes is a pretty large open-source Go project: https://github.com/kubernetes/kubernetes

Docker is, too: https://github.com/docker/docker-ce


Is Kubernetes really a monolith ?

It isn't one codebase and when deployed there are multiple daemons running.


People confuse monolith and monorepo all the time. If you have a monolith where each independent module communicates via a queue, as described in the article, you are very close to having a monorepo of horizontally scalable micro-services.


Kubernetes always gets brought up but I would argue that while it is large it is not simple and was not architected to be understood by novices to monolith projects. The goal of kubernetes is to be kubernetes, not to be a pristine learning example - and though you can learn from studying it it is probably not the best place to start learning.



I recommend having a look at Django and Rails for inspiration. People love to hate them, but they get the scalable monolith just right in my opinion.

Speaking about Django, the framework provides core features (HTTP handling, database connections, ORM, templating...) that apps (what you call "sub-apps") can leverage. A Django project is just a collection of apps, either yours or third-party.


> look at Django and Rails for inspiration. People love to hate them

You'll always get some portion of the community hating whatever's on top. Even in 2018, if your startup isn't using one of these two, you'd better have a damn good reason. Rails, and to a lesser extent Django, are the absolutely boring but productive as fuck workhorses of any startup. Bypass them at your peril when your project passes in front of competent due diligence.


Upspin is a monolith, I think?

https://github.com/upspin/upspin

Also, my personal site is a medium-sized monolith:

https://github.com/shurcooL/home


The Linux source is a monolith, as I understand it. I don't know about Go monoliths.


Monolith is usually described in the context of a single binary application.

Linux isn't a monolith. And it isn't a monorepo either since Linux comprises far more than just the kernel.


If one wants to get technical, Linux is only the kernel, which is why Richard Stallman pushes people to refer to most of the complete operating systems that use it as GNU/Linux [1]. And Linus Torvalds and Andrew Tanenbaum had a very famous flame war [2] over the issue of monolithic kernels vs. microkernels, in which Linux was the monolith, and there are a lot of parallels between this article and the design considerations at stake. Not all of the advice here applies well to kernels, IMO. But one can certainly say that from a practical standpoint, Linus Torvalds has a few reasons to feel that he and his monolithic kernel won that particular debate.

[1] https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy

[2] https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...


Linux is famously a monolithic kernel.


Linux is the kernel, and is different from GNU, as Linus himself puts it:

"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. "

( https://en.wikiquote.org/wiki/Linus_Torvalds#1991-94 )


It’s fashionable just now to hate monoliths, so much so that developers would probably be afraid to suggest anything other than creating a suite of microservices for fear of sounding backward or inexperienced. It’s a bit like if anyone ever dared to suggest that they don’t adopt Agile. You’d probably be chased out the room by an angry mob.


It's fashionable right now to like monoliths. It's "thought leaders" trying to one-up other "thought leaders," contradicting for sport. And who cares anyway? Software fashion is cargo cult nonsense pushed by people who probably struggle to ship anything that's robust or reusable. They're obsessed with keeping up with "thought leaders" on Twitter and Medium. They fly to speak at conferences and make a big name for themselves while the rest of us are busy grinding and limiting the damage they can do.


It may be fashionable to like monoliths on HN, but microservices is still the hotness if I judge by the recruiting emails I get.


Exactly, microservices are mainstream and boring now. Designing around services? You must be a pointy-haired boss!


Cult Driven Development. That’s where we are now. I think it’s one of the things that puts me off the job most these days.


I've been seeing more and more backlash against micro-service architecture recently. Seems silly to me. Developers should match the architecture to the problem and adapt as needed rather than sticking to an ideological pattern.


But "monolith" is a concept of bad stuff, so everyone must get away from it, apparently. The thing is, they're just throwing this word around. Before the micro-service trend, "monolith" to me meant software with no boundaries, modules, or design - like it had solidified in a bad state - to which the antidote was design. Seeing this word being used to describe "not running a micro-service architecture" gives me feelings of lobby/marketing manipulation in the craft; this is plain bullshit. If the application is well designed, it is not a monolith! If it's well designed then it has well defined pieces and well defined APIs, and if you need to micro-service anything you can just plug it out - it is still just a fucking API, you could be calling it by carrier pigeon, it doesn't matter. Having basic software development concepts being pushed, or positioned, as trendy pricey buzzword-manager-throws-around hot-architecture concerns is nothing more than a corruption of the craft.

Damn fools.


Exactly!


So... keep modules separate, use queues and databases to communicate between modules, make sure the queues and databases are a separate process from the monolith.

This sounds a bit less like a monolith and a bit more like a bunch of microservices that you've welded together in one container. Which is great! If you're disciplined about maintaining that, it will be very easy to spin off these individual modules as their own microservices when the time comes.

But I'm not sure that this constitutes an argument for monoliths > microservices.


What is presented here as a monolith is, however, what the word "monolith" means in current design conversations that I hear in workplaces. Also, I don't think the article was trying to advocate monoliths > microservices, but rather more like "it's ok to use a monolith if that works, and at first it probably will". Because true microservices introduce a lot of infrastructure work at the very beginning, when you desperately need to be working on code for your basic product.


What's a monolith anyway? At this point just a straw man to pitch microservices against. There are many ways to modularize code; after all, (class-based) OOP is about code organization via modularization. In my experience, "REST microservices" is one of the worst, as it derives its working principles from the unrelated problem of loose coupling of web browsers and web servers just because it has to use HTTP for unclear reasons, then applies these principles incorrectly, naively, religiously, and dogmatically, while mimicking a two-tier application model that isn't called for. It's really the mother of all antipatterns.


@honest_update: We replaced our monolith with micro services so that every outage could be more like a murder mystery.


Capitalism magic, rebrand & profit.


Worked for a place recently (and briefly) that claimed to be building a platform on micro services. When I got there I found that while they indeed had a bunch of independent processes hanging around they all relied on a single data source. All introducing micro services accomplished was a LOT more network overhead between processes and the main (SQL) data store. When you have a bunch of processes talking to the same data store, you still have a monolith whether you want to call it that or not, because you lack the freedom to refactor your schema without changing every service that relies on the store.


Ah yes, the distributed monolith antipattern.


I agree entirely. Microservices have advantages but they also bring along severe complexity. Many problems can be cleanly modeled as a monolith - and they should be. No architecture pattern is immune to abuse.


> Microservices have advantages but they also bring along severe complexity.

Worse, often the complexity they bring is undocumented because it lies in the relationships between applications, accounts, infrastructure, and so on, and does not lie within the boundary of a single application. Microservices are simpler to write because you can pretend a lot of the ecosystem does not exist.


On top of that, microservices often suffer from inadequate tooling. Companies like Amazon have been working on tooling for their services for literal decades, most companies have some half-assed attempts at tooling and don't understand why that's a problem. Consider deployment, for example, how do you know what's deployed at any given time? Well, the answer is usually something along the lines of "oh, that's just the head revision of all of the relevant repos/branches". And what about the staging and dev environments? And what about logging and history? How much work is it to figure out what code was running when certain bugs were filed?


That sounds like a monitoring problem and nothing to do with microservices vs. monoliths. If you have 10000 replicas of your monolith, how do you know what version they are running? If you can answer that question then you can also answer it for 100 replicas of 100 different services.


The advantage of microservices is the ability to scale, both in terms of system performance and feature velocity. They are worse at basically everything else.

Any system architecture needs to consider the specific case, there’s no best pattern for everything.


If microservices are worse, maybe we have not architected them well. For example, if two micro-services are chatty, maybe they need to be together to begin with. Micro monoliths are a good compromise :)


Even if they aren’t chatty, microservices are worse because you’ve turned a local procedure call into a remote one and now will run into all the issues faced by distributed systems including unreliable communication and network overhead. Not to mention the added difficulties in debugging and so on.


Yes. Microservice is only micro by context.


I run into the microservices obsession very often in the high-growth startups I advise, so often that I wrote Enough With The Microservices (https://adamdrake.com/enough-with-the-microservices.html).

I typically see people try to push microservices as a "best practice" for improving the delivery rate of software teams. They carve the codebase into separate services, but still leave all of the persistence in place so that all services are communicating with the same DB(s). The result is a tangled web of interdependent services, plus additional tooling like Docker, Kubernetes, etc. that actually makes the teams even slower than they were before.

I'm glad to see us as a community getting more pragmatic about these topics, and realizing that _best practices_ are highly context dependent!


My angle on building products: Monolith to market.

Break it up and optimise where the sticking points are as customer base and feature set matures.


It's not that simple, and that's the problem with microservices. You'll learn that "break it up" means a huge investment in devops, logging, stats dashboards to understand what the problems are, etc. It's like transitioning dimensions from 2D to 3D, and that's the trap that most people fall into.

You cannot survive microservices without real-time dashboards and proper logging to understand the health of your environment, whereas with a monolith, it was fairly obvious because there's the server and usually a database, and that's it.


All true about monitoring and logging. It is a huge cost and overhead. Consider it the other way around: micro-service to monolith. It seems to me it is as difficult, if not more difficult. Potentially different languages or different build systems are used. A different framework might be in place. Sometimes a merge is completely infeasible without a total rewrite. So going from micro-service to monolith is like going from 3D to VR. Splitting seems, in a way, way easier than merging back.


Totally agree it's not simple. But a monolith in the early stages enables a much faster change cycle from early adopter feedback, since there's less wiring up to consider as features change.


Totally agree with this. No idea why anyone would do things different. Other paradigms are cool and fun, but if you need to get your prototype out there now, this is absolutely the way to do it.

I do the same thing with the components of the monolith. I'll use Pyramid + SQLAlchemy + WTForms all day long with a reasonable separation of concerns. When I hit a real problem where the ORM is causing a bottleneck, hey, that's a good time to drop the ORM and use stored procs.

Same thing with almost any element of the web app. When it's a clear bottleneck, drop one layer of abstraction and move on.

This will get you excellent performance without the overhead of an SPA for 99% of your use cases.

Design your way around page refreshes, and judiciously apply ajax calls, and you're done.

If you aren't one of the big 5, this is good enough, even with a slow language like Python.


I agree with this. However, for whatever reason, suggesting a monolith or (gasp!) not writing an SPA for an app and using traditional SSR HTML seems to signal that you're antiquated and old fashioned. I've only been doing this for 3 years and it's ridiculous.


Yes, this is the problem I am facing just now (been doing this for 15 years). I care more about writing decent code than chasing trends, but it means my skillset looks stagnant.


This still looks like a micro service architecture to me, or very close. A monolith solution would probably be everything in a single process (1 tier). Queues decouple services; whether the code base includes the consumer & producer together or they are separate doesn't matter, they are still deployed separately.

It's definitely not black/white, but a spectrum.


So, in micro-service architecture services don't have to be particularly small. They also don't need to actually be services. Do words not have meaning any more?


Microservices pundits have strategically (or accidentally) engineered the hate for non-microservice design. They chose the term "monolith" to disparage anything that wasn't SOA. "Monolith" has a natural negative connotation so right there, you're winning at the "tech pissing fest" at the bar after work.

The fact is that "monolith" applications can scale, and they can be deployed as a unit and they can still adjust to varying load using modern cloud infrastructure.

The enemy to scale, productivity and success isn't "monoliths". It's complexity! And an experienced and pragmatic developer will tell you that they fight daily to avoid complexity at every turn. Yes, sometimes it's necessary, but they weigh the pros and cons on a case-by-case basis to ensure that the gradient to complexity is always as low as possible.


And, how is complexity caused? What are the origins?

It is caused by bringing together unrelated things.

It is caused by violating the single responsibility principle.

So, we isolate state from communication. We separate interface from implementation. We see as many, where others see one. We try to see the network of dependencies, where others see a single line of code.

Using this new perception, we find that the large application entangles the change of (code) state of one part with every other through its deployment. The static typing guarantees only validate one specific instant in time, but nothing about the future or the past.

So, take a step back and see the deployment of code as a functional part of the code. Now the monolith forms complexity and the microservice isolates it to the minimal surface.

Take it one step further, and we see that the actor model is the exponent of this idea. Late binding, late dependency tracking, programming by behavioural contract.


> And, how is complexity caused? What are the origins?

> It is caused by bringing together unrelated things.

It is also caused by wrongly separating things that are closely related and putting a network or something similar in between.

You cannot reduce software development to a formula.


No, so saying outright that microservices are better or worse is just that: a formula.


No, it's a categorical consideration made without the needed context. Basically an opinion.


I think we're just bad at predicting the future. You can go fast and sling code at the wall because the odds of ever reaching FAANG scale are low and you want to be pragmatic, and then one day you accidentally do hit FAANG scale and you're in the news every week for security breaches and outages. You can also do everything DRY and micro-serviced from the beginning, run out of runway and crash. Or you could get to FAANG scale and realize you need to rewrite everything because you didn't foresee X, Y, Z requirements and technology has shifted 20 yards to the left. My approach is the gardener mindset. Think about organic growth up front when writing the code, but also be pragmatic and start with a "minilith" with interfaces in the right places.


If you keep yourself aware from day one of the issues in your codebase/service and what doesn't scale, it's not that hard to switch from a monolith to microservices; that's the point of this article, I think. There's a difference between a conscious decision to ship a monolith and a quick "ship it now" monolith. Most companies won't reach FAANG scale; they'll be lucky to even become the preferred local service.


"Monolith" literately translates to "Einstein" in German, which can't be that bad. Or can it? ;-)


I think there's something that people mostly forget. Most of the time we are comparing microservices with monoliths but forget that almost all monoliths are much older than whatever microservice someone has recently built. Also, whoever builds those usually has the luxury of not repeating the inevitable mistakes you made when you built the monolith. So saying that a microservice is better than a crappy monolith is kind of not true. You can argue all you want about whatever style, but anything well built is still good. And it's easy to fuck things up in both scenarios.


Undoing microservices at my current job has been nothing but positive.


Both my current employer and my last have fallen into this trap: trying to do microservices with a small team and moderate to low complexity. Both built distributed monoliths with a shared core and were spending lots of money keeping them from falling over. They incur all the costs of microservices with none of the benefits.

Microservices are a siren song for many small/medium sized companies. What they need is clean separation between subdomains (composing domain-level API functionality together in the user-level APIs to implement business logic) and clean separation between domain and infrastructure code. There are far less complex ways to achieve this without microservices.
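
A rough sketch of what that can look like inside a single deployable (hypothetical names, Java just as an example): domain-level interfaces per subdomain, infrastructure stubbed behind them, and a user-level API composing them into business logic.

    // Hypothetical subdomain separation inside one monolith: the user-level API
    // composes domain-level functionality; infrastructure hides behind the
    // domain interfaces.
    interface InventoryService {
        boolean reserve(String sku, int quantity);
    }

    interface PaymentService {
        boolean charge(String customerId, long amountCents);
    }

    final class CheckoutApi {
        private final InventoryService inventory;
        private final PaymentService payments;

        CheckoutApi(InventoryService inventory, PaymentService payments) {
            this.inventory = inventory;
            this.payments = payments;
        }

        boolean checkout(String customerId, String sku, int quantity, long amountCents) {
            // Business logic lives here; the subdomains know nothing of each other.
            return inventory.reserve(sku, quantity) && payments.charge(customerId, amountCents);
        }
    }

    public class CheckoutExample {
        public static void main(String[] args) {
            CheckoutApi api = new CheckoutApi(
                    (sku, qty) -> qty <= 10,          // stand-in for real inventory code
                    (customer, cents) -> cents > 0);  // stand-in for a payment gateway
            System.out.println(api.checkout("customer-1", "sku-42", 2, 1999));
        }
    }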


Of course they are OK. It doesn't matter whether it is a microservice or a monolith; what matters is how well your architecture executes the principle of separation of concerns. That is largely independent of which philosophy you choose.


I don't get it. The article argues that a "good monolith" is composed of a code base + RabbitMQ deployed in Docker. The code base probably needs a database too, let's say MySQL. You probably also need to serve static assets (JS and friends), so an nginx container might be handy.

Isn't this already an application composed of four micro-services, three of which are 3rd-party while one is developed in-house?


For anyone trying to put together a well architected 'monolith' using C#, I can recommend https://github.com/realvizu/NsDepCop, which allows you to set up far more granular and useful dependency rules within your code than standard project references.


The famous J2EE + ESB style pattern.

Many of us have worked on dozens of these before, and sure, they work, but microservices were invented specifically to address the serious concerns with these patterns.

1) It's far too easy to create spaghetti code that spreads across multiple modules. And because they are so intertwined, teams are very reluctant to do major refactors, especially when the original authors eventually leave.

2) It's less flexible at scaling. In a monolith you have to horizontally scale the entire application behind some load balancer. So if you have an Email concern that needs to handle 100x more load, what do you do?

3) API contracts are far easier and safer to evolve than ESB ones. There are also clean mechanisms in APIs for handling versioning, capability discovery (e.g. HATEOAS), documentation (e.g. Swagger), rate limiting, caching, analytics, billing, authentication, etc.

4) Microservices are much easier to reason about, test, document, teach new starters about and most importantly replace. If I replace a microservice I just need to verify that the API contract is the same. In a monolith I need to basically retest everything since there is no guaranteed isolation.

I could keep going. But I'm not sure this blog post offers anything particularly insightful about how to deal with the negatives of a monolith architecture.


Is this a parody or are you being serious?

Do you honestly believe this drivel?

Take the 'much easier to reason about'.

A monolith is simple to reason about. You can literally see the whole call stack in your favourite debugger.

It's so simple to reason about, yes? Do you see how a simple breakpoint gives you the whole program in a monolith?

That's literally impossible in a micro-service setup. With a microservice setup even the most trivial of bugs become an utter nightmare to debug.


> With a microservice setup even the most trivial of bugs become an utter nightmare to debug.

Alternative view: It's not.

I get your point but with a well designed microservice it's pretty simple as well. You just need proper logging in production and a good development setup.

Complex issues involving multiple services are a lot harder, I grant you.
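
A big part of "proper logging" is just carrying a correlation ID through every hop so logs from different services can be stitched back together. A minimal sketch with plain JDK logging (invented names, not any particular library's API):

    import java.util.UUID;
    import java.util.logging.Logger;

    // Minimal correlation-ID sketch: generate an ID at the edge, include it in
    // every log line, and (not shown) forward it as a header on outgoing calls
    // so a single request can be traced across service boundaries.
    public class CorrelatedLogging {
        private static final Logger LOG = Logger.getLogger(CorrelatedLogging.class.getName());

        public static void main(String[] args) {
            String correlationId = UUID.randomUUID().toString();
            handleRequest(correlationId, "GET /orders/42");
        }

        static void handleRequest(String correlationId, String request) {
            LOG.info(() -> "[" + correlationId + "] received " + request);
            // ... call downstream services here, forwarding correlationId ...
            LOG.info(() -> "[" + correlationId + "] done");
        }
    }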


Having extra code in the binary is not a serious scaling issue. You can reserve capacity for specific endpoints with an L7 load balancer, and flip config flags to run different worker processes on different instances.

Microservices are not about scaling the runtime, they’re about scaling the engineering organization and deployment processes so that teams can iterate and ship independently. With thousands of engineers committing to the same artifact, the probability that at least one commit is bad (blocking rollout of all others) approaches 1.
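
A minimal sketch of the config-flag idea above (all names invented): the same monolith binary picks a role at startup, so email capacity can be scaled by running more instances with that flag set, no separate service required.

    // Hypothetical "same binary, different roles" setup: the deploy config (or an
    // L7 load balancer's target groups) decides which instances get which role.
    public class Main {
        public static void main(String[] args) {
            String role = System.getenv().getOrDefault("APP_ROLE", "web");
            switch (role) {
                case "web":
                    startHttpHandlers();
                    break;
                case "email-worker":
                    int workers = Integer.parseInt(System.getenv().getOrDefault("EMAIL_WORKERS", "4"));
                    startEmailWorkers(workers);
                    break;
                default:
                    throw new IllegalArgumentException("Unknown APP_ROLE: " + role);
            }
        }

        private static void startHttpHandlers() {
            System.out.println("serving HTTP endpoints");   // embedded web server would start here
        }

        private static void startEmailWorkers(int count) {
            System.out.println("running " + count + " email workers");  // queue consumers would start here
        }
    }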


It is a massive scaling issue. Most enterprise JVM apps, for example, run with 8GB+ of RAM. Adding 10x the capacity, which is only needed for, say, sending emails, can be hugely expensive or sometimes impossible if you are running on-premise.

And scaling different parts of the application is just one of many benefits of microservices.


I would think that the point of writing a single process monolith with components communicating via a bus like this is that it’s a small step to break out one of those components into a separate process and scale it horizontally.
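
Roughly what that could look like (hypothetical names): components only talk through a small bus interface; the first implementation is an in-process queue, and swapping it for a broker-backed one (RabbitMQ etc.) is the small step that moves a consumer into its own horizontally scaled process.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical bus abstraction: producers and consumers only see publish/
    // consume, so the in-process implementation can later be swapped for one
    // backed by a real broker without changing either side.
    interface MessageBus {
        void publish(String topic, String payload) throws InterruptedException;
        String consume(String topic) throws InterruptedException;
    }

    final class InProcessBus implements MessageBus {
        private final Map<String, BlockingQueue<String>> topics = new ConcurrentHashMap<>();

        private BlockingQueue<String> queue(String topic) {
            return topics.computeIfAbsent(topic, t -> new LinkedBlockingQueue<>());
        }

        @Override
        public void publish(String topic, String payload) throws InterruptedException {
            queue(topic).put(payload);
        }

        @Override
        public String consume(String topic) throws InterruptedException {
            return queue(topic).take();
        }
    }

    public class BusExample {
        public static void main(String[] args) throws InterruptedException {
            MessageBus bus = new InProcessBus();            // the swap point
            bus.publish("email", "welcome:customer-42");
            System.out.println("email worker got: " + bus.consume("email"));
        }
    }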


It seems that apart from scaling your concerns boil down to enforcing discipline? Perhaps there are methods of enforcing discipline that don’t involve serializomg and deserializing a bunch of data several times per request.


Bingo! Agreed. You’ve summed it up brilliantly. Other than the scaling advantages, micro-services mainly thrive as a method of strictly enforcing separation of concerns. Maybe that’s a good reason to use them. Or maybe there are other ways to do this without incurring the management overhead of micro services. Who knows. Also, I’m not entirely sold on the horizontal scaling advantages either. Are you building a website that’s going to have to scale to millions of requests per second at the click of a button? Probably often not.


"Strictly enforcing separation of concerns" is not a great argument. If feature teams don't have the technical maturity to handle separation of concerns, they'll make an even bigger hash out of the extra resiliency, testing, and other overhead introduced with microservices.


> serializomg

I realize this was a typo but first thought it was an awesomely-named serialization library :)


I read it as an opinion on the cost of serializing


It wasn't. It is now though.


Brilliant :-)


Monoliths have been around for decades and much smarter people than me have been thinking about this problem.

Unfortunately there is just no way to enforce discipline, because monoliths sometimes demand unintuitive behaviour. Often it is better to cut/paste code than it is to link from one module to another. And don't forget that when a deadline hits, developers are often forced to do things that aren't always architecturally sound.

Microservices forces isolation between modules.


Perhaps I should have said "there should be methods of enforcing discipline". As in, "we should make/find them".

In .NET you could compile individual assemblies separately and use technical measures to prevent developers of one component from touching code in another component. The external surface of the assembly would be its API, only with no network traffic required. Does this give us enough discipline, in your opinion?


> Often it is better to cut/paste

Microservices surely put the dream of code sharing to REST. Not sure it is a good thing.


One can create an equally long list of problems with micro services.


It’d be way longer.


Re: 4, you can absolutely test and reason about modules in a monolith if you design and decouple things correctly. You are throwing a new, complex and fragile architecture at the problem as a blanket fix for sloppiness. You will have worse problems, I assure you, if you cannot modularize a plain classic application correctly, no matter how many microservices you throw at it.

To clarify: I don't think microservices are a bad thing, but monoliths are, many (most) of the time, perfectly good.

Also, I think the middle ground is the better option. You can have services but they don't need to be micro.


Refactoring code is much easier to do if the code is in one place and you can run it and set breakpoints. It is nightmarish to do this in distributed systems, especially when minor API semantics can cause huge problems that can go undetected forever.

Lastly, don't blindly believe some blog article - they are likely written by unemployed folks with too much time on their hands (possibly using the article to find their next gig).


Hmm, I would disagree with some of your points, or at least suggest that similar if not worse problems can arise with microservices:

1. Large-scale refactoring is easier when you have a single build artifact that you can test and deploy. Refactoring a microservice architecture can be significantly harder if that refactor crosses microservice boundaries (say, if you're repeatedly doing the same thing across multiple microservices and want to extract that behavior). Microservices do make it easier to do refactors that don't cross service boundaries, but that assumes that you chose good service boundaries in the first place, and if you can do that, you can also modularize your monolith well enough to make those small-scale refactors easy, too.

2. While this is a valid point, not only do you not have to go to microservices to address it, but using microservices naively can get you worse results. Email isn't a synchronous task, and you should be dropping it on a message queue for an offline worker to pick up anyway.

For different feature concerns that are online and need to happen synchronously with requests, it can be handy to route different endpoints to different groups of servers that are scaled and optimized independently of each other. But that doesn't actually require microservices; you can just deploy a monolith that way.

3. Most of those mechanisms only exist because coordinating behavior over a distributed system is inherently harder. When you're calling other code in the same process, you have much firmer guarantees about latency and availability. API contracts within a service don't have to be monitored, metered, or rate limited the way service endpoints do. You don't have to worry about serialization and deserialization--not even from a performance standpoint, but from a reliability standpoint. And you don't have to worry as much about input validation if you're using a strongly typed language.

4. A microservice is easier to test. A full product that is composed of microservices is harder to test.

Testing a microservice architecture entails building as many of the microservices as are necessary for a specific piece of functionality, deploying all of them to a shared test environment (or a set of shared test environments that are configured to interoperate), and configuring the microservices to communicate with each other within that test environment. And it's very unlikely that you can test a single feature across your microservice architecture in isolation unless you have at least as many separate, isolated test environments as you have features under active development. Additionally, since these test environments are expensive, they tend to be long-lived and accumulate various operational issues. Functionality that, within a monolith, could be tested within an isolated, repeatable build process, gets punted out to these dirty and unpredictable test environments in a microservice architecture.
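
To put point 4 in concrete terms: inside a monolith, a feature spanning two modules can be exercised in the ordinary build with nothing deployed at all. A sketch assuming JUnit 5, with made-up module names:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical test of a feature that crosses two in-process "modules",
    // run as part of the normal build: no shared test environment to deploy,
    // configure, or keep clean.
    class CheckoutFlowTest {

        interface PriceCatalog {            // module A's public surface
            long priceCents(String sku);
        }

        static final class Checkout {       // module B depends only on the interface
            private final PriceCatalog catalog;

            Checkout(PriceCatalog catalog) {
                this.catalog = catalog;
            }

            long total(String sku, int quantity) {
                return catalog.priceCents(sku) * quantity;
            }
        }

        @Test
        void totalIsPriceTimesQuantity() {
            Checkout checkout = new Checkout(sku -> 250L);  // stub module A in-process
            assertEquals(750L, checkout.total("sku-1", 3));
        }
    }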


> a complicated microservice architecture

> Microservices are complicated to develop

> Microservices dependencies are difficult

Well, if you say so! Kind of hard to take this article seriously when the argument boils down to a tautology.

Microservices may be hard! This article contributes nothing to that conversation, though.


We need to define what a monolith is before concluding that well-architected monoliths are okay.


This is quite funny. People are basically saying 'think about your problem, do not follow the hype' and that feels like a revelation. Says something about the state of the industry, doesn't it?


It is also difficult to keep a monolith "well architected" in reality. Even so, we should start from a "well architected" monolith first, as the last part of the article says.


Didn't he say he runs the async queue consumer in another process? Doesn't that make it practically another "micro service"?


This sounds like a great way to get most of the problems of monoliths and most of the problems of microservices at the same time.


A reminder that the space of architectural possibilities is not merely a binary domain of {microservice, monolith}.


What "well-architected" system is not okay?


And arbitrary micro service boundaries are a disaster.


> they let development teams focus on delivering features

I agree, but later on, when the team and the codebase grow, you'll need to split it into smaller parts. Microservices (or any other similar architecture) give you some guidance that can be shared across the team(s), making architecture decisions more consistent and providing a common framework that improves reusability. But it's not an easy path!


If you try to differentiate monolith acceptability based on "quality of architecture", you really have never understood why they are hated in the first place.


No architectural pattern can survive a lack of maintenance and grooming. Sprawling microservices written in a dozen different languages can be worse than a mangled monolith.


I always say if you can't design a monolith then don't bother with microservices. You shouldn't use microservices to cover up for a lack of design skills or a lack of communication between teams.


Every man is an island in the world of microservices.

Look, my microservice is clean and nice and has a 100% test coverage, I couldn't care less if you can't communicate with it from yours. Solve it somehow.

Now get off my lawn and let me rewrite the whole thing in scala.



