
> Microservices was always a solution to the organisational problem of getting more developers working on a system at once.

Hmm? You can do that simply with API boundaries inside a program. The Linux kernel has a huge development team, and it all compiles into a single binary.




Sure you can, and then you have a team that introduces a bug in a common library that causes 100% CPU consumption on all cores, and the tests don't catch it, so it somehow gets into production.

In the monorepo world, your whole system is broken; it's all hands on deck to try to fix it, and everyone is stressed out.

In the microservice world, you have only one microservice which went down, so most teams don't have to worry. In the worst case, they'll say: "Sorry, but we depend on service X and they are down... blame them, nothing we can do." Sure, the team that introduced the bug is stressed, but the average stress across the company is much lower.

Having a successful monorepo requires an organized, cohesive team with good communication -- or at least a team with highly experienced people who have veto power (this is the Linux kernel model). Unfortunately, a lot of real-life businesses do not have that.


> In the microservice world, you have only one microservice which went down, so most teams don't have to worry..

My experience has been that one service going down (or even running slowly) can lead to cascading failures where identifying the root cause is a slow, painful process. I know that in a well-designed system that doesn't happen, but that's the nature of all bugs, isn't it?


Well yes, you still need monitoring to be able to tell that the failure is external, but the key idea is that each team can do that separately, without having to get everyone's buy-in or instrument every call in the system.

For example, we had a batch processing system with pretty relaxed latency requirements, and at some point we were asked to integrate with (internal) service X. The problem was that service X would go down periodically. The solution was pretty simple: the centralized error logging service we already had, some asserts on results, and a timeout on all HTTP calls. This works very well, for us at least. Service X still goes down every once in a while, but we can always detect that and explain to our (internal) customers that it is not our fault the system is down. Our customers were the ones who selected service X in the first place, so they are pretty understanding.
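
A minimal sketch of that pattern in Go, assuming a hypothetical service-x.internal endpoint, with plain log output standing in for the centralized error logging service:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    // A hard timeout on every HTTP call means a flaky dependency can never
    // hang the batch job; the worst case is a logged, attributable failure.
    var client = &http.Client{Timeout: 5 * time.Second}

    func callServiceX() ([]byte, error) {
        resp, err := client.Get("http://service-x.internal/v1/lookup") // hypothetical endpoint
        if err != nil {
            // Stand-in for the centralized error logging service: record that
            // the failure is in the external dependency, not in our batch job.
            log.Printf("service X call failed: %v", err)
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            err := fmt.Errorf("service X returned HTTP %d", resp.StatusCode)
            log.Printf("service X unhealthy: %v", err)
            return nil, err
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        if _, err := callServiceX(); err != nil {
            fmt.Println("skipping this batch step; the dependency is down:", err)
        }
    }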

Is it a desirable situation to be in? Nope; in the ideal case, someone would go to the team behind service X and help them make it reliable, with proactive monitoring, good practices, more staffing, etc. But I work in a big org, and each team has its own budget, management and priorities. So the microservices approach is the best we can do to still get the work done under such conditions.


> In the monorepo world, your whole system is broken, it is all hands on board to try to fix this, everyone is stressed out.

No, not at all. And the kernel is a single repo even.

> In the microservice world, you have only one microservice which went down, so most teams don't have to worry.... blame them, nothing we can do

Well, how is that better than just switching the dependency back to the last known good version and shipping, instead of being dependent on a whole different team just to get the dependency fixed _and_ running again?

> Unfortunately, a lot of real-life businesses do not have it.

That may be true; however, the benefit of those properties shows up in different kinds of development projects, and IMHO this is not a question of monolith vs. microservices. Nor is it a question of monorepo vs. non-source (binary) distribution.


And microservices do NOT require an organized, cohesive team with good communication?

If anything, there is MORE communication.

And how many teams who do distributed systems really know what will happen if a critical service goes down? The system is down - same effect.


The communication is in terms of well-defined and documented APIs, which having a microservice boundary strongly enforces.

> The system is down - same effect.

Yes, if it's a service many other services depend on, but not so much if it's near the leaves of the service dependency tree, in which case the system may still be up with reduced functionality.
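
As a sketch of that "reduced functionality" idea (the recommendation service here is made up, and its failure is simulated), the caller of a leaf service can fall back instead of failing the whole request:

    package main

    import (
        "errors"
        "fmt"
    )

    // fetchRecommendations stands in for a call to a leaf service; it fails
    // unconditionally here to simulate that service being down.
    func fetchRecommendations(userID string) ([]string, error) {
        return nil, errors.New("recommendation service unavailable")
    }

    // renderHomePage degrades gracefully: the page still renders, just without
    // the recommendations section, so the system stays up with reduced functionality.
    func renderHomePage(userID string) string {
        recs, err := fetchRecommendations(userID)
        if err != nil || len(recs) == 0 {
            return "home page (recommendations temporarily unavailable)"
        }
        return fmt.Sprintf("home page with %d recommendations", len(recs))
    }

    func main() {
        fmt.Println(renderHomePage("user-42"))
    }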


You can use microservices to decouple things like deployment schedules, build dependencies, configurations, etc.


You mean "duplicate"?


I think the Linux kernel is a special case. For starters, due to the nature of the kernel itself, a lot of components are very intertwined. You'll be hard pressed to find one area of the kernel core in which you can work without at least having an idea of how other parts work. But for other components, such as drivers, you'll find they're far more independent and different teams do work on them, and update them separately even in out-of-tree module builds, mimicking a bit how you'd do microservices in that limited setting.


So is every piece of large software a 'special case'? Besides the Linux Kernel, there's every other OS ever written, office tools, browsers, video games, all sorts of SaaS apps. The idea that microservices solve some problem in that space is bogus.


It is a special case because there's nothing underneath the kernel. Different pieces of the kernel usually cannot interact through anything that isn't the single binary; there are no further levels of abstraction.

But office tools, browsers, video games, and SaaS apps can benefit from the architecture. Microservices, at least for me, don't mean "different processes running in containers on AWS", but separate "services" that can be run and deployed independently. One easy example is login components: say one of those tools lets the user log in with one or more remote services. Instead of putting that code in the same monolith, you could have a separate binary that gets called by the parent and spits out an authorization token. The separate binary can be tested, developed and updated separately.
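
A rough sketch of that shape (the ./login-helper binary and its --provider flag are made up for illustration); the only contract between parent and helper is "print a token on stdout":

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // fetchToken invokes a separately built and shipped login helper and reads
    // the authorization token it prints to stdout. The helper can be tested,
    // developed and updated on its own schedule.
    func fetchToken(provider string) (string, error) {
        out, err := exec.Command("./login-helper", "--provider", provider).Output()
        if err != nil {
            return "", fmt.Errorf("login helper failed: %w", err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        token, err := fetchToken("github")
        if err != nil {
            fmt.Println("could not log in:", err)
            return
        }
        fmt.Println("got a token of length", len(token))
    }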

Of course, you can't apply microservices to everything and have it make sense, but the ideas of separate tools deployed and updated independently are not bogus. I mean, the Unix philosophy could be understood as a kind of microservices architecture, with different independent tools managed by separate teams.


> the ideas of separate tools deployed and updated independently are not bogus

No, but the idea that you should do that because you have a big team is bogus. There are good reasons to break some applications up into services, and this is the least compelling one.


I think that having an architecture matching your organization is not such a bad idea. If the organization consists of multiple independent teams with different managers, deadlines and priorities, what would be easier? An architecture with separate, isolated services that communicate over some established APIs and can be updated, tested and deployed independently, or a monolith where, just to release a new version, you have to get all the teams to agree on it?


IMHO you should still take it the other way round.

What Conway wrote: You can use the development of the software as one tool to learn about the social interactions in the organization.

Only doing microservices and then saying "Hey look, this matches our organization" offers very little overall.

So what if the different managers, with their deadlines and priorities, are the cause of many more problems, and the wrong decision to do microservices is only one example?



But that requires discipline.


Discipline, leadership, organization, and, most of all: coherency between developers.

Most orgs do not have these things - especially not at the level of the Linux team. Also, the motives of OSS are completely different from those of your typical business - I personally think this keeps an OSS team coherent/productive for far longer than your typical corp can manage.


Mainly because for a very long time, Linus reviewed every pull request.



