The Majestic Monolith (signalvnoise.com)
304 points by davidgl on Feb 29, 2016 | 91 comments



I think there's an important distinction to keep in mind between the "dependency monolith" versus the "change-control monolith". They're somewhat orthogonal.

For example, you can have a monolithic-repository (SVN, git, whatever) full of sub-directories housing mostly-independent projects. Conversely, you can have many repositories, with each containing a module that only works when present with all the rest in a monolithic-runtime.

Currently I'm working with some corporate internal systems that deploy new versions every couple of weeks, and I'm trying to work towards the former model: while we want to avoid a big ball of mud, the overhead of wrangling a dozen repositories with different versions/histories/merge operations seems wildly unnecessary.


Good point. Language-level modular dependencies are one way to divide an architecture. Networked services are another.


A useful term here is 'unit of deployment', i.e. a chunk of software that can be upgraded and redeployed on its own (implying versioned communication with other units and data storage).


Summary:

1. The term "Monolith" has a bad rep.

2. "Majestic Monolith" means eliminating needless abstraction and avoiding distributing your system (unless unavoidable)

3. The term also means writing "beautiful, understandable, and succinct code ... that makes us smile when we write it ... and later have to extend or patch it".

4. Everyone has to understand all parts in the "majestic monolith".

5. The "Majestic Monolith" puts pressure on / incentivizes programmers to keep the code-base clean.

6. Most programmers will rise to the occasion.

I think most programmers would agree with the meanings/goals ascribed in 2-3 (possibly also 4, if 4 is understood as deeply understanding the structure of your software product) as generally good rules for building software.

1, 5, and 6 seem highly subjective to me. I tend to think if you're building anything stringently according to 2-3 you'd probably be fine and could call that method whatever you wanted to.

There were some interesting ideas in here, regardless of whether or not the argument for the majestic monolith succeeded.


I think he started with this (good) principle:

  > an integrated system that collapses as many unnecessary conceptual models as possible. Eliminates as much needless abstraction as you can swing a hammer at.
And then went on to conclude that the 'monolith' is the only way to do it.

There are ways to eliminate needless abstraction while still keeping proper separation of concerns.


Separation of concerns doesn't depend on having a network layer between your function calls, though. You can have a well-structured monolith with distinct, well-separated inner parts. It's just easier to cheat in code than over the network.


Ultimately separation of concerns is an organizational and social problem, not a technical problem.


Technical enforcement beats "best practices" every time.

(But you can enforce separation of concerns through a language that lets you do that rather than needing a network connection)
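
As a rough sketch of the kind of language-level enforcement I mean (TypeScript here, with hypothetical names, not anyone's actual codebase): export only a narrow interface and a factory, keep the concrete class module-private, and the compiler itself polices the boundary.

    // billing.ts -- only the interface and a factory are exported; the concrete
    // class stays module-private, so nothing else in the monolith can reach into
    // billing internals even though it all runs in one process.
    export interface Receipt {
      id: string;
      amountCents: number;
      chargedAt: Date;
    }

    export interface BillingService {
      charge(customerId: string, amountCents: number): Promise<Receipt>;
    }

    // Not exported, so invisible outside this module.
    class InMemoryBilling implements BillingService {
      private nextId = 1;

      async charge(customerId: string, amountCents: number): Promise<Receipt> {
        // Real ledger writes / payment-gateway calls would live here.
        return {
          id: `rcpt-${customerId}-${this.nextId++}`,
          amountCents,
          chargedAt: new Date(),
        };
      }
    }

    export function createBillingService(): BillingService {
      return new InMemoryBilling();
    }

Any other module that tries to import InMemoryBilling directly simply won't compile, which is exactly the kind of enforcement that doesn't require a network hop.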


>Technical enforcement beats "best practices" every time.

Not in my experience. People will find a way to beat even the best "technical enforcement" every time. Not necessarily hack it -- just work around it to move the problem elsewhere.


There's no way to stop people working around things. But an automated warning is far more reliable than constant vigilance on the part of the programmer.


Even more reliable is to keep your programs small :)


Not my experience.


While 1, 5, and 6 seem highly subjective, 1 certainly matches my experience talking architecture over the years, and failing to do 5 has certainly led to the death of more than one project that I've been involved in.

6 is only true if you've hired the right people :-)


6 is only true if you can afford to hire good people.


If you can't afford/attract the good people, the people you do hire will learn on the job in your business, i.e. make mistakes which you have to pay for.

I think everyone makes these mistakes early in their career, and it's only avoidable if senior developers can teach newbies in a way that makes sense and is understandable.

In the end, most people come to the conclusion that programming languages are less important for success, but that bad habits and complexity can kill everything. Even with frameworks most of us tend to rate as "easy to learn", I've seen people completely mess things up. There is no language and/or framework that both prevents people from making mistakes AND allows them to learn/work productively.

There is no holy grail. It's part of our job to assess technology problems and justify the (new/old) solutions we use to fix them. Ignoring the past and/or ignoring the future (i.e. trends) is a mistake. Learn to question your tech choices regularly without questioning yourself.

Sure, it's easier for one to "ride the wave" of a hype. Containerize everything, microservice everything. Our tech landscape grows in complexity even faster than a couple of years ago. Somehow many people love complexity because they don't enjoy writing "stupid business software" and want to be some mad architect that rewrites everything in his/her favorite niche language with 10 new layers of indirection just because…


>Somehow many people love complexity because they don't enjoy writing "stupid business software" and want to be some mad architect that rewrites everything in his/her favorite niche language with 10 new layers of indirection just because…

These are the worst type of programmers. Complexity for the sake of complexity. They will replace a ten-line Python script with a Python "app" made up of multiple classes, each class containing its own generator implementation, because using loops or recursion is for noobs...


...Or if you can inspire them with your mission. For example, Watsi is a non-profit, yet the caliber of their engineering talent is incredibly high according to pg. I can't find that quote, but here's another quote along the same lines, from the co-founder of triplebyte: https://twitter.com/Harjeet/status/684478156716150784


If you have bad programmers who are pushing hard on a microservices architecture, you're probably in for more trouble than bad programmers maintaining a monolith. Big, complex monoliths can be tough to work with, but probably overall easier than poorly-designed microservices.


1, 5, and 6 all seem quite objective, although whether they are accurate is debatable. They aren't given much empirical support here.


My concern with this is embedded right in the article:

    One of the benefits to the majestic monolith is that it basically presumes that the people who work on it also understand it.
I don't consider this a benefit, it is instead a substantial risk. And it is something that is almost impossible to maintain indefinitely as software teams grow, companies merge, new platforms or partners emerge, etc. The new developer tasked with understanding a monolith has a nearly insurmountable task compared to understanding a limited-scope service. Ultimately I think software architecture is more usefully approached as strategies to prevent disaster rather than creating the opportunity for majesty.


Anyone who truly writes majestic code will write good code whether it is a monolithic project or a distributed project. Some of the code in the Apollo project was truly beautiful, even though they didn't have functions.

I'm kind of doubting that this guy writes majestic code; I would like to see him put more emphasis on separation of concerns, which you should have even if you don't use a distributed architecture.

(Also, as you point out, "everyone understands every part of the project" only works for small projects).


Well, it is a giant risk. But DHH's entire shtick is to sell well-known bad ideas as "good enough", usually by a combination of straw-manning (e.g., splitting up your app) and semi-hokey common-sensism.


I don't think you and OP are necessarily contradicting each other. OP explicitly says that if you are a big company planning to get bigger by doing lots of hiring or acquisition, his advice is not addressed to you, and you should take advice from big companies. He's saying if you have a small team that is going to stay smallish, that's a different situation that warrants a different strategy.


The problem is that "is going to stay smallish" is a prediction that has a very high chance of being wrong in the very long term. There's also an assumption that the organization retains its understanding of the monolith indefinitely, which even in a team that stays small can be lost due to turnover.


I envy your confidence of being successful!


>And it is something that is almost impossible to maintain indefinitely

Well, no software can be (or will be) maintained indefinitely.


It feels to me like the argument of this article can be boiled down to: "The Majestic Monolith is a Good Thing, since when you know you're dealing with a monolithic system, you know you'll be hosed if you don't take extra special care." But I don't see anything here arguing that such care is inherent to the pattern--only that such care is needed when applying the pattern. I see that as a caveat, not as a built-in feature.

I agree that with a small team it's much more feasible to keep the monolith well-polished. But that polishing is more a product of professionalism and discipline than something that comes along naturally with applying the pattern--which is orthogonal to whether or not the code is monolithic. Good patterns are supposed to be non-orthogonal to good discipline--that is, they should bring along design benefits as an artifact of the pattern itself.


non-orthogonal?


If you move in the direction of the pattern, its direction should carry you at least somewhat in the direction that a good design would, where 'orthogonal' would not carry you in that direction at all. So, not orthogonal, even if not strictly parallel. My metaphor and non-metaphor are unimpeachable :p


Parallel? :P


We use microservices at my workplace with a team of 5 and have since the beginning. From reading HN it seems like we may be the only small team to have benefitted from the approach?

Some advantages that this article seems to ignore:

- Even with a small team, microservices allow us to make small iterative changes to services without touching other parts that are working as they should be.

- It's easier to find and patch bugs when they can be pinned down to a smaller codebase and fixed in isolation (most can be).

- Local development becomes easier - you can spin up just the services you need for the feature you're developing instead of the entire system.

- It's easier to get "up to speed" when you need to add a feature when you're working with small codebases (and often only need to touch one or two at a time).

- In our case we have three distinct sets of consumers: two fairly separate groups of end-users, and automated services (m2m). It's quite common for any issues we have to affect only one of those three groups, because they are largely isolated in different services.

That's not to say there haven't been some headaches, but I personally much prefer our service approach to a monolith. I really think the key is using tooling to remove the operational friction of working with services.


We've also benefitted from moving from a monolith to SOA-esque services. IMO, the thing that DHH is failing to mention is that monoliths work great when your application looks something like Basecamp.

Basecamp, while a great tool, is a "simple application". I don't mean it's not well built, I don't mean it's easy to build, I don't mean it was fast to build, but if you look at what it does, it's all vanilla stuff.

Many other companies don't have the luxury of building apps that look like Basecamp, and hence our architectures are more complex.

When he writes these kinds of articles, they always feel somewhat strawman-ish.

At the end of the day, each software project has its own unique requirements that will govern success. We get paid the nice salaries we do because it is up to our discretion to work out the most efficient architecture for what we are trying to solve.


> The thing that DHH is failing to mention is that Monoliths work great when your application looks something like Basecamp.

The thing that people fail to grasp is that (almost) everything DHH writes is in the context of "when your application looks something like Basecamp" :-) I've been following 37signals/Basecamp since 2002, I think (long before Rails) and they've always made it clear that their opinions/advice are _not_ universal, because they only speak from experience - _their_ experience. So, if you're not in a similar situation as theirs, then of course their arguments are going to look strawman-ish to you.


And while he does that to excuse himself from having to reiterate it later, the fact that he doesn't anyway leaves a reader thinking by the end, "Damn, maybe he's right. Maybe all software should be this way", even when it shouldn't.


> ...they've always made it clear...

Always? How about in the OP?


That's a fair point, and for sure there are problems that demand a more complex approach.

What I do sometimes see as a problem is that people assume that their app necessarily needs to be more complex than Basecamp, and that arguing for simplicity and "plain old CRUD" approaches to problems is an impossible task.

The best way to manage complexity in a lot of cases is to reject the complexity in the first place. That's not always feasible, but there's a real cost to complexity even when well-managed, and if a "simple application" approach can deliver the same or even 80% of the value, it's often a trade-off worth making.


These days I code in immutable, functional, objectless clojure for the most part, so I'm with you 1000% on reducing complexity.

The easiest line of code to maintain is the one that doesn't exist. :)


Just curious what you mean by "if you look at what it does, it's all vanilla stuff".

What would "non-vanilla stuff" look like?

It's all just code, right?


If you look at their feature list (https://basecamp.com/3/features) you see: message boards and comment threads, real-time chat/pings, automatic check-ins, to-do lists, docs & file storage, and a centralized schedule.

Everything in this list, sans real-time chat, I would consider low complexity, low volume, and low velocity. It's basically: request a view, hit a cache/db, display some data. Send an email now and again. Store some stuff on S3 now and again.

There are lots of sites that have more complex feature sets than what Basecamp does: things like complex transcoding of media, real-time updates to their views, streaming of complex media types, the need for near-real-time integration with 3rd-party APIs, real-time bidding, real-time decision making, real-time analytics, etc. etc. etc.

I want to be very clear that I am not criticizing Basecamp as a product. Simplicity is difficult to achieve in software, and their product does a great job of doing most of what you need, and not much else. I based the software stack of the first company I founded on a pre-1.0 Rails monolith (Mongrel had just been released), so I'm not even anti-monolith.

I just think this particular post of his is entirely too simplistic and could lead younger software designers astray.


Real-time chat isn't even that complex, especially now that Rails has ActionCable.


I think he means "all the things that Rails handles well with default options/configurations"


I'm perfectly willing to admit that I might be wrong here, but I don't really see a difference between a microservice approach and any other technique for abstracting out functionality. The only real difference is that you're using a network connection to enforce your abstraction. Back in the old days we used to use fork() and pipes for the same effect ;-)

The thing is that if you don't require multi-processing, you are adding complexity to your solution that doesn't actually exist in the problem domain. Not only that, but you are forced to nail up your API between the services. My experience has been that one of the most common causes of complex code is premature subsystem decomposition. Before you have written much code, you design the subsystems and nail up an API. If it turns out that it doesn't fit, the opportunities for refactoring are practically zero.

For some problems (and it could easily be the case for the problem you are working on), micro-services are really useful. Generally this happens when you require multi-processing. You create a service for each thing that requires a separate process and you build an API for communicating/coordinating those processes. You have to do that anyway, so it's a benefit to design the services (or whatever).

Generally speaking, though, it is beneficial to delay subsystem decomposition as long as possible (but no longer). This allows you to change your internal APIs at low cost and acquire more requirements. It significantly reduces the risk that you end up having to make major squirrelly work-arounds. If you don't need another process, then you can often split out reuse libraries as soon as their API solidifies. If you do need other processes, then micro-services (or other similar approaches) can be introduced.

On many teams, you often have developers who are incredibly confident that their initial designs will hold up over time. They are usually the person I am cursing when I show up 5-10 years later and have to wade through the jungle of bizarre work-arounds. For that reason, I tend to suggest to people that they should avoid micro services until they actually need them (which could very well be never).


I completely agree with your description of prematurely setting in stone the interface between two subsystems (which could be modules/libraries or independent processes.) If a chunk of functionality is identified as a candidate for a microservice, making an otherwise isolated library in the existing codebase is by far my preferred first step.

It lets you live with the seam between the two systems for a while, adjust responsibilities and communication patterns cheaply as they reveal themselves as pain points, and delay or avoid paying whatever costs are associated with deploying a new microservice (which at my current company, at least, are non-zero.)

Outside of a few examples where we've identified a simple and truly orthogonal set of responsibilities up-front for a new microservice (e.g. image resizing, push notification delivery) I have always regretted prematurely building a new service instead of extracting it from an existing codebase.
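
To make the "library first" step concrete, here's roughly what that seam tends to look like (a TypeScript sketch with hypothetical names, not our actual code): the rest of the codebase only ever talks to the small interface, so promoting the implementation to its own service later doesn't ripple outward.

    // notifications.ts -- a candidate for extraction, but still just a library today.
    export interface PushNotifier {
      notify(userId: string, message: string): Promise<void>;
    }

    // The in-process implementation we live with while the seam settles.
    // If this ever becomes its own deployable, only this class is replaced;
    // callers keep depending on PushNotifier.
    export class InProcessNotifier implements PushNotifier {
      constructor(private readonly log: (line: string) => void = console.log) {}

      async notify(userId: string, message: string): Promise<void> {
        // Placeholder for the real delivery logic (APNs/FCM calls, retries, ...).
        this.log(`notify ${userId}: ${message}`);
      }
    }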


Just an observation that most of your advantages (4/5) are for the developers and their process of doing work.

Choosing an architecture based on your ease of working seems fine, until you need to engineer that architecture to be fault-tolerant, scale, etc. (not saying what you are doing isn't fault-tolerant, etc.) That is where micro-services and distributing your architecture can introduce more failure points in the underlying infrastructure than with a monolith.

It all depends on the problems being solved. My hope is people will architect a system based on the business needs as the primary factor and nice-to-haves would include making your work flow easier or trying new shiny things out.


Interesting - very much not my experience. I think part of this is a question of typed languages (and structured build systems) vs. dynamic - with a typed language you can have strongly enforced separation between different parts of the codebase without needing to split them across a network connection.

> Even with a small team, microservices allow us to make small iterative changes to services without touching other parts that are working as they should be.

My experience is this should be the case anyway. Having the ability to "find usages" in the IDE, and lots of compile-time enforcement, gives me a lot of confidence that changes haven't affected anything unrelated.

> It's easier to find and patch bugs when they can be pinned down to a smaller codebase and fixed in isolation (most can be).

True but not something I find microservices help with. Any decent debugging/introspection tools are going to give you better information than a network sniffer.

> Local development becomes easier - you can spin up just the services you need for the feature you're developing instead of the entire system.

Don't forget the overhead of the network layer though. Particularly if you're using service discovery (and if not you have a config problem), I find running two or three microservices locally is heavier than running a monolith that contained the equivalent of ten of those services, and it's a fiddle to start the right ones during development. As long as the codebase remains a size that can be comfortably run on a single machine (and start up reasonably quickly, and all be open in the same IDE at the same time, and so on), it's much easier to just run the whole thing.

> It's easier to get "up to speed" when you need to add a feature when you're working with small codebases (and often only need to touch one or two at a time).

True up to a point - there's an intimidation factor, but I don't think it's actually any harder to understand a folder in a repo of ten folders than a repo that's separated from the other ten. And separating creates a barrier to organically expanding your understanding. If you have distinct build modules then project encapsulation is enforced just as much as for microservices.


Congratulations! Those advantages you list are too big to ignore.

I think the biggest stumbling block to SOA is that failure states have to be considered up-front, and the division of responsibilities has to be designed into the system up-front. Degrees don't train you how to do this, so engineers stick with what they know will never fail (function calls).

We've been experimenting with a plugin-based architecture on some work projects, which forces us to address separation of concerns (at least) while still keeping things in the same process.


I wish engineers stuck with function calls!

It's configuration files and objects that worry me.


Most of those advantages aren't microservice specific - they're just a function of well factored code.


It's only 'easier to spin up just the services you need' if you assume there is a cost to spinning up everything.


Agree with this fully because I've experienced it first-hand.

We had a working monolith; it was not the best, but it was stable and it did the job. Then a new CTO came in and spread the microservice hype. We rebuilt everything as microservices in such a hurry that we didn't even implement the most important part - the orchestration.

Now the system is doing pretty much the same thing, except it's way less stable. But hey, we're microservices! (We also went from PHP to Node, which IMO was also a huge mistake.)


I totally agree with the article, but your example seems a bit flawed. If you rebuild anything "in such a hurry that you didn't even implement the most important part...", then regardless of architectural choices you will end up with something "way less stable".


I've seen this happen twice - once with the microservices dogma, and then with the introduction of an ESB to a large organisation without thinking about the cost of the orchestration...


You can scale a monolith way further than you initially think. We thought that GitLab CI was logically a separate application. Now that we've integrated it into GitLab, we see that this has mostly brought advantages. Working with 18 developers and 1000+ contributors on the same application works well. Of course you need to keep your code clean and the models well separated, but if you ship early and often there is very little coordination overhead.


I certainly agree with DHH's point. I did a talk at PHP World last year with essentially the same message [1].

At VideoBlocks we're slowly starting to migrate our monolith into a service-oriented architecture where it makes sense. E.g. our new search service is now a standalone microservice. But I think that we're still very far from an architecture that consists solely of microservices.

I'm a huge fan of the hybrid approach in the interim; it allows us to migrate slowly and only when it makes sense. This means that we've had plenty of time to establish orchestration and deal with all of the small infrastructure edge cases that other commenters in this thread have been struggling with.

[1] https://speakerdeck.com/minicodemonkey/microservices-with-la...


I like the idea of SOA. The separation of concerns feels right to me on some level. But every time I look at it being applied, it's clear to me how much more money and time is being spent implementing it. With SOA you take on the additional cost and headache of orchestrating your system. If you're big enough or complex enough, this may make sense. But I feel like a lot of dev teams applying this approach don't have the specific problems of scale that SOA solves.


Another good article on the Monolith

http://martinfowler.com/bliki/MonolithFirst.html


I was thinking of this article as well. A team might not always know ahead of time what the boundaries of their data & code classification will be.


The "genesis" (perhaps) of the current rise of microservices might be the Yegge Platform rant and the 2 Pizza rule

https://plus.google.com/+RipRowan/posts/eVeouesvaVX

http://www.businessinsider.com/jeff-bezos-two-pizza-rule-for...

Wherein he describes Dread Pirate Bezos' edict:

  His Big Mandate went something along these lines:

  1) All teams will henceforth expose their data and functionality through service interfaces.

  2) Teams must communicate with each other through these interfaces.

  3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

  4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care.

  5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

  6) Anyone who doesn't do this will be fired.
and

  Amazon's Jeff Bezos has a solution for this problem. He calls it the "two pizza rule": Never have a meeting where two pizzas couldn't feed the entire group.

You'll note that nothing in there has to do with technical debt, or unmaintainability of apps, or test suite runtimes, or monoliths, or docker, or any sort of technical problems. Microservices are a technical solution to an organizational scalability / coupling problem.

All too often I see small, co-located teams (<10 people) adopting a microservice architecture because #reasons which are ill-defined.

Docker is awesome, microservices are awesome, but they come with a lot of taxes to pay, and tl;dr - until you know you need to pay those taxes, you should stick with a monolith.


And the wheel goes round and round...

I look forward to breaking down these monoliths again in 5 years time, same as 10+ years ago. Not every business decision is best informed as an engineering one - and vice versa.


That's the point. The right approach is to get to a point where your monolith needs breaking down: congratulations, you've been successful.



Interesting links, thanks for that.


I talk about this frustration a little here:

https://www.youtube.com/watch?v=-qHwL8C9UoA

and also (more pithily) here:

https://www.youtube.com/watch?v=zVUPmmUU3yY

There's nothing more frustrating than people telling you 'I wouldn't start from here' when you are the one that has written the productive software.

There are tremendous advantages to monolithic software architectures, but for some reason they are seen as impure. It reminds me of the Java/object orthodoxy of ca. 2000 when everything had to be a class.

Meanwhile, procedural code came back with a vengeance, because its advantages are those that can be used by people trying to write useful software on a budget.


If you practice separation of concerns and design your interfaces well, I feel the debate is moot. Who cares if your authentication (micro)service is running locally or on its own remote cluster, as long as it implements the same interface? Clients of the authentication system don't need to know how it works, just the interface. The team responsible for the authentication system can work in parallel and publish new versions as a library that can be imported by clients.
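
Roughly what I have in mind, as a hedged TypeScript sketch (hypothetical names, and it assumes a runtime with a global fetch): callers depend only on the interface, and whether the implementation is an in-process lookup or an HTTP hop to a remote cluster is a deployment detail.

    // Callers only ever see this interface.
    export interface Authenticator {
      verify(token: string): Promise<boolean>;
    }

    // Runs in the same process as the caller.
    export class LocalAuthenticator implements Authenticator {
      constructor(private readonly validTokens: Set<string>) {}

      async verify(token: string): Promise<boolean> {
        return this.validTokens.has(token);
      }
    }

    // Same interface, backed by a remote auth cluster over HTTP.
    export class RemoteAuthenticator implements Authenticator {
      constructor(private readonly baseUrl: string) {}

      async verify(token: string): Promise<boolean> {
        const res = await fetch(`${this.baseUrl}/verify`, {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify({ token }),
        });
        return res.ok;
      }
    }

    // Client code never knows (or cares) which one it was given.
    export async function requireAuth(auth: Authenticator, token: string): Promise<void> {
      if (!(await auth.verify(token))) {
        throw new Error("unauthorized");
      }
    }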

I unfortunately don't have the need for a distributed system right now. Does anyone with more experience care to elaborate?


People would defend and love Rails / the monolith a lot more if it were not the slowest framework out there.

And it is not that Rails doesn't scale, it is just very expensive to scale it.

José Valim https://twitter.com/josevalim/status/704320020432723968


I don't think that rails has anything to do with the architectural approach of M/SOA vs "monolith".


Even in a "monolith" architecture, you usually have client side HTML/Javascript, a web server, business logic, and a database. Unless you have some huge traffic level, that scales. You can replicate the database, load balance the servers, add caches and a content delivery network if necessary. Those don't need much development; you buy those.

When the business logic is getting really complicated, and you're changing it rapidly, you may need to break it up into services. But are you really that big an operation?

Wikipedia, the fifth-busiest site on the web, is MySQL front-ended by nginx, with the business logic mostly in PHP.


I understand where the post is coming from as some micro-services architectures are just a mess and I see startups trying to force it.

But the post misses a major point of why microservices architectures exist: to decouple dev efforts. I can hire someone off Upwork or anywhere to write a small service in whatever framework is suitable for them, as long as the service connects to the message bus. And I can scale services horizontally much more easily when each uses its own data model, etc.

It's not just about the size of the company but also about scale, both of developers and servers.


We use SOA. I see both sides of this issue.

SOA has given us the ability to send a single part of our architecture to other people to work on (contractors, etc.) and let them understand that one part well enough to get their job done without too much exposure to the rest of our system. It's allowed us to bring on new hires and get them working on code quickly, and to let them deploy without total fear of the system imploding. It's also allowed us, as a small team, to become specialists in certain areas of the architecture. While we all write similarly, we can each approach the issues in our own way without stepping on each other.

On the other hand, many things have been done which shouldn't have been. Some areas are way TOO decoupled via SOA and need to be brought back together for better stability and speed. We learned that afterwards, and it only became necessary after lots of complexity had been added which we didn't foresee, because no one (programming & business) thought we would need it to be that complicated.

For some areas it'd be a good idea if we merged some repos and services into a single, better service with stronger integration, while maintaining the APIs we built for other things.


It comes down to Conway's Law [0]:

> organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations

If you are a single team producing it, then build a monolith. If it is a diverse set of teams collaborating, then build microservices.

[0] https://en.wikipedia.org/wiki/Conway%27s_law


I think this only applies to people implementing their apps using SOA as part of cargo-cult thinking, like was mentioned in the article.

I ran a company with 4 employees. 3 developers and an idiot sales guy. SOA made perfect sense from the start. We had daemons running background tasks on our servers in Go (the best tool for that job), a separate data API we used for our main "monolithic" web app, and then our mobile and other clients all used the data API.

The developers working on the web app only had to know the API end points to get data into the "monolith" and the rest of us working on the API and daemons understood how all the other clients would use them. No issue.

I'm all for the idea that you shouldn't implement an SOA just because successful companies do it, but I feel like this article is recommending building a monolithic Rails app (or something similar) as a reaction to how popular and talked-about SOA has been lately, and it doesn't really leave much room for the idea that small companies (even really, really small ones) can use SOA and that it would make sense for them.


The article is simply reinforcing the idea that there is an alternative, and monolithic applications are not some kind of "outdated technology". Both SOA and monolithic applications are useful in different circumstances.


In a monolith you can achieve code compression ratios (think DRY as a sort of entropy coding) not possible in micro-services.

Therefore, if you build it sensibly, you can achieve more with fewer resources. But this also requires a time-memory trade-off of sorts, in that it takes more effort for any individual programmer to learn all the abstractions before eventually becoming (surprisingly) productive.


I actually just started working on a new Node.js project that could get very large, and the first question I asked myself was... Do I want each npm package to be in a separate repo or use a single repo with deeply nested but discrete npm packages?

I decided to go with a single repo with lots of discrete NPM packages using NPM private. This allows for things like:

    const logger = require('@org/utils.logger');
Which to me reads beautifully. The only real gotcha is remembering to run npm publish and npm install often. There are quite a few duplicate packages installed, though. For example, almost all packages I write require lodash, so it is installed n times instead of just once globally at the root. But... I'm OK with that.
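
For anyone wondering what that looks like in practice, here's a rough sketch of the layout (hypothetical package names, written in TypeScript/ES-module syntax rather than require, and obviously not the actual repo):

    // packages/utils.logger/index.ts -- published privately as @org/utils.logger
    export function createLogger(name: string) {
      return {
        info: (msg: string) => console.log(`[${name}] ${msg}`),
        error: (msg: string) => console.error(`[${name}] ${msg}`),
      };
    }

    // packages/billing/src/charge.ts -- a sibling package in the same repo,
    // consuming the shared package through the private registry exactly as
    // external code would.
    import { createLogger } from "@org/utils.logger";

    const log = createLogger("billing");
    log.info("charging card...");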

If each package was inside its own repo that increases complexity and costs (lots of private repos in GitHub) and I don't really see the benefit.


This should be re-titled as "Majestic Monolith RENDERING"

Reason being, it's only the rendering engine, the monolithic Basecamp app, that renders both web and mobile.

But a variety of 3rd-party services are used to make Basecamp work, such as:

- Queen Bee is an external rails app for billing

- CloudFront for CDN

- S3 for customer doc storage

Etc

All of which are services (microservices) stitched together to make Basecamp work.


Team size isn't the only factor in breaking a monolith into services. In my experience it's been an effective strategy for removing bottlenecks in your application, as well as a way to isolate a piece of code that causes errors and crashes the application.


You can also introduce bottlenecks this way. The most obvious of which is now you've introduced a socket boundary and very likely at least one network roundtrip for a piece of functionality that used to be local.

That said, one of the reasons I like using Erlang for building services is the structural primitive of the OTP Application. It makes it pretty easy to build something self-contained that exposes a function API but can be deployed local to your other processes on the same VM or be trivially moved out to a dedicated machine/cluster as a remote service with almost no change to the programming model for interacting with it.

Deciding where a piece of functionality should live is almost purely a question of how much latency you're willing to tolerate. Though to be fair, this is because the OTP Application construct gives an incentive to not build a true monolith from the outset, despite the fact that everything that makes up your whole system might be in one big release tarball.


I have seen a number of failing microservices projects. I think that the key problem (for small teams) is that it makes it very difficult to test and onboard new devs. I've written about it recently on my blog as well: https://lasernotes.com/blogs/better-code/2016/01/13/monolith...


A bit OT but if microservices communicate over http, doesn't that mean a lot of overhead due to handshakes etc? Compared to just calling another method in the same code base it seems really wasteful and slow.


HTTP can be fast enough, but it is also a poor choice for communication between microservices. Many people learn about REST and just assume you must use HTTP. It's an awful protocol with two decades of technical debt.


So what is best practice?


One interesting case for microservices: distributed teams. Less coupled systems are easier to develop in less coupled organizations.


SOA is not the antonym of 'monolithic'. The author is tilting at a straw man of his own devising.


This strikes me as false wisdom: in software it is a mistake to look at a large job that needs to be done and insist that the job is only the purview of megacorps like Amazon or Google.

The company I work for has about 200 people and our system is at large scale, many thousands of events per second, so a microservice architecture is required. (It started out as a monolith actually, and grew out of that pattern when it didn't fit anymore.) But most people haven't heard of us.

I say you should be careful when you read stuff like this. It makes so much sense! Which is why you need to be careful of course.


> our system is at large scale, many thousands of events per second, so a microservice architecture is required.

I'm not sure if you're making a performance argument, or an organizational one. Microservices don't make things faster though. Introducing IPC where there wasn't any previously is not going to make something faster (it's going to be significantly slower 99% of the time actually) unless you also increase resources dramatically.


> It started out as a monolith actually, and grew out of that pattern when it didn't fit anymore.

Pretty sure that's exactly what DHH (the author) advocates for.


Well, I think an engineering team of 200 (or even 50 if 200 is your entire company) starts to exhibit some of the characteristics of a "large company". I've seen very small engineering teams succeed with microservices, but I'd say the sweet spot for moving to microservices is right between the 37 Signals "12 engineers" and your larger team.


"has about 200" that easily qualifies you as large org. if most of that 200 are working on the same product.


No, even a midsize company has a headcount of thousands. 200 headcount puts us solidly in small company territory.


For development purposes, it seems like head-count per-project is a more useful metric?

My impression from a past life was that even large companies didn't frequently staff hundreds of developers on the same project. Even if they might have thousands of developers overall.


200 developers is a fairly decent-sized team for a single product. There are many teams at FB or Google that are smaller than 200 devs.



