Hacker News

Even good architects with good ideas about modularity will fail when writing monoliths, because that whole approach to software is intrinsically antithetical to decoupling and modularity. It’s like asking a professional soccer player to play soccer at the bottom of a full swimming pool: it doesn’t matter how good they are, because the ambient circumstances render the task untenable. It’s the same for good engineers asked to work in monolith/monorepo circumstances. Through outrageous overhead costs in tooling and inflexibility, the best you can hope for is stabilizing a monster of coupled, indecipherable complexity, as in the case of Google’s codebase, and even that minimal level of organization is only attainable by throwing huge sums of money and tens of thousands of highly expensive engineers at it.



It is relatively easy to have teams writing modular code.

They just have to learn how to actually use and create libraries in their language of choice.

Each microservice is a plain dll/so/lib/jar/... maintained by a separate team.

No access to code from other teams, other than the produced library.

It isn't that hard to achieve.
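As a minimal sketch of that discipline (package and function names are hypothetical), in Python the team-owned "library" is just a module whose public surface is deliberately small, and consuming teams depend only on that surface:

```python
# billing.py -- hypothetical module owned by the billing team.
# Other teams see only the names exported here; internals stay private.

__all__ = ["create_invoice"]

def _tax_rate(region: str) -> float:
    # Internal helper; the leading underscore signals it is not part
    # of the public API and other teams must not rely on it.
    return {"EU": 0.21, "US": 0.07}.get(region, 0.0)

def create_invoice(amount: float, region: str) -> dict:
    """Public entry point: the only contract other teams depend on."""
    return {"net": amount, "tax": round(amount * _tax_rate(region), 2)}
```

The same boundary works whether the artifact is a jar, a dll, or a wheel: consumers link against the published interface, not the source tree.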


Your comment makes it clear to me that you don’t understand microservices. The challenge is not in organizing compilation units or code into libraries, not at all.

The challenge is that in reality you will always need distinct build tooling, distinct CI logic, distinct deployment tooling, distinct runtime environments & resources, etc., for almost all distinct services, as well as super easy support to add new services that rely on previously never used resources / languages / runtimes / whatever. This need happens whether you choose a monolith approach or microservice approach, but only the microservice approach can efficiently cope with it.

The monorepo/monolith approach can go one of two ways, both entirely untenable in average case scenarios: (a) extreme dictatorship mandates to enforce all the same tooling, languages and runtime possibilities for all services, or (b) an inordinate amount of tooling and overhead and huge support staff to facilitate flexibility in the monorepo / monolith services.

(a) fails immediately because you can’t innovate and end up with some horrible legacy system that can’t update to modern tooling or accommodate experimental, isolated new services to discover how to shift to new tooling or new capabilities. This does not happen with microservices, not even when they are implemented poorly.

(b) only works if you’re prepared to throw huge resources and headcount at the problem, which usually fails in most big orgs like banks, telcos, etc., and has only succeeded in super rare outlier cases like Google in the wild.


I have developed projects with SUN/RPC, PVM, CORBA, DCOM/MTS, EJB, Web Services, SOA, REST.

So I think I do have some experience regarding distributed computing.

And the best lesson is that I don't want to debug a production problem in such systems, full of spaghetti network calls, with possible network splits, network outages, ...


Your comment about debugging is much, much more applicable to monolith services than microservices. Digging into the bowels of a monolith to trace the path of a service call is brutal. Even with spaghetti-code microservices, by contrast, you can rely on the hard boundary between services (even when the boundaries were drawn poorly or correspond to the wrong abstractions) as a definitive kind of binary search, and as a much more natural, composable seam for automatically mocking calls in tests or while debugging, to isolate which component has the problem.


With a modular monolith I need one debugger, probably something like trace points as well.

With microservices I need one debugger instance per microservice taking part in the request chain, or the vain hope that the developers remembered to log the information that actually matters.
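The "log information that matters" point is usually solved with a correlation id forwarded across the chain, so one request can be traced through every service's logs. A minimal sketch (header name and service name are illustrative, not from any particular framework):

```python
import logging
import uuid

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("checkout")

def handle(request_headers: dict) -> dict:
    # Reuse the caller's id if present, otherwise start a new trace.
    rid = request_headers.get("X-Request-Id", str(uuid.uuid4()))
    log.info("request %s: starting checkout", rid)
    # ...call the next service, forwarding {"X-Request-Id": rid},
    # so its logs carry the same id...
    return {"X-Request-Id": rid}
```

Grepping all services' logs for one id then reconstructs the request chain without attaching a debugger anywhere.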


If I worked with you, I would give negative feedback regarding your approach to debugging. You don’t appear to be taking steps to isolate the problem, rather just lazily stepping through a debugger expecting it will magically reveal when a problem state has been entered.

In the monolith case, your debugger is likely to step into very low-level procedures defined far away in the source, with no surrounding context to explain why you are there, and no way to tell whether whole sections of code could be categorically excluded from the search the way separate sub-components can be logically ruled out.

Instead you’ll have to set a watchpoint or something, run the whole system incredibly verbosely, trip the condition, and then set a new watchpoint accordingly, essentially doing serially what you could do in log(n) time with a moderately well-decoupled set of microservices.

You’d also have the added benefit that for sub-components you can logically rule out, you can mock them out in your debugging and inject specific test cases, skip slow-running processes, whatever, with the only mock system needed being a simple mock of an http/whatever request library. One simplistic type of mock works for all service boundaries.

To do the same in a monolith, you now have to write custom mocking components and custom logic to apply the mocks at the right places, coming close to doubling the amount of test / debugging tooling you need to write and maintain to achieve the same effect you can literally get for free with microservices (see e.g. requests-mock in Python).

And all this has nothing to do with whether the monolith is well-written or spaghetti code compared to the microservice implementation.


List of employers on my CV speaks for my approach to debugging.



