Hahaha. I have been saying the same for years but have either been punished by my managers or mercilessly downvoted on HN.

Somehow, mentioning that microservices might not be the perfect solution in every case triggers a lot of people.

I actually helped save at least one project at a huge bank, which got rolled from 140 services into one. The team was also scaled down to a third of its size, yet was able to work on the monolithic application far more efficiently. Reliability, which had been a huge issue before the change, also improved dramatically.

When I joined, I observed 6 consecutive failed deployments. Each took an entire week to prepare and an entire weekend to execute (with something like 40 people on a bridge call).

When I left, I had observed 50 consecutive successful deployments, each requiring an hour to prepare (basically a meeting to discuss and approve the change) and two hours of a single engineer's time to execute using automation.

Most projects absolutely don't need microservices.

Breaking anything apart brings the inefficiencies of having to manage multiple things. Your people now spend time managing applications rather than writing actual business logic. You have to have a really mature process to bring those inefficiencies down.

If you want to "do microservices" you have to know precisely what benefits you are after, because those benefits had better be higher than the costs or you are just sabotaging your project.

There are actually ways to manage a huge monolithic application that don't require each team to have its own repository, CI/CD, binary, etc.

How do you think things like Excel or Photoshop have been developed? They are certainly too large for a single team to handle.


I think my biggest gripe with orgs that adopt microservices is that they don't build out any of the testing, CI/CD, monitoring, and debugging workflows needed to support them. It goes from a shitty, slow monolithic application that one super-pro can debug in a few minutes to slow, shitty disparate services managed by different teams who don't share anything. Suddenly you've got a Cuckoo's Egg situation where one guy needs to get access to all the things to find out what the fuck is happening. Or you just accept that it's shitty and slow, and pay a consultancy to rebuild it in New Thing 2.0 in 8 years, when accounting has forgotten about the last 3 rebuilds.


That is EXACTLY what I have observed, time and time again.

If you have trouble managing ONE application, what makes you think you will be better at managing multiple?

Also, running a distributed system is way more complicated than having all the logic in a single application. Ideally, you want to delay switching to a distributed system until it is inevitable that you will not be able to meet demand with a monolithic service.

If your application has problems, don't move to microservices. Remove all unnecessary tasks and let your team focus on solving the problems first, then automate all of your development, testing, deployment, and maintenance processes.

Or call me and I can help you diagnose and plan:)


Monoliths are also distributed systems and will run on multiple hosts, most probably coordinating on some sort of state (and that state management will need to take care of concurrency and consistency). Some hosts will go down. Service traffic will increase.

I understand your point. You are using "distributed" in the sense of "how one big piece of work is distributed"; you probably also hate overly "object-oriented" code for similar reasons.

But "distributed systems" is a well-understood term in the industry. If I call you and you tell me this, then you're directly responsible for hurting my success by giving me a misleading sense of what a distributed system is.


> But "distributed systems" is a well-understood term in the industry.

Wait, what?

Distributed systems are one of the most active areas of CS research right now. That's the opposite of "well understood".

It's true that most systems people create are required to be distributed. But they are coordinated by a single database layer that satisfies approximately all of the requirements. What remains is an atomic facade that developers can write against as if their client were the only one. There is a huge difference between that and a microservices architecture.


Distributed systems are well understood, though. We have a lot of really useful theoretical primitives, and a bunch of examples of why implementing them is hard. That doesn't make the problems easier, but it's an area that, as you say, has a ton of active research. Most engineers writing web applications aren't breaking new ground in distributed systems; they're using their judgement to choose among tradeoffs.


Well-understood areas do not have a lot of active research. Research aims exactly at understanding things better, and people always try to focus it on areas where there are many things left to understand.

Failure modes in distributed systems are understood reasonably well, but solving those failures is not, and the theoretical primitives are far from universal at this point. (And yes, they are hard too, where "hard" means "generalize badly" more than "hard to implement", as the latter can be solved by reusing libraries.)

The problem is that once you distribute your data into microservices, the distance between well-researched, solved ground and unexplored ground that even researchers don't dare enter is extremely thin, and many developers don't know how to tell the difference.


Correct. That doesn't make monolithic systems "not distributed".

Secondly, I don't know why you say "distributed systems are an active area of research" and use it as some sort of retort.

If I ask "Is a monolithic app running on two separate hosts a distributed system or not?", and your answer is "We don't know, it's an active area of research" or "It's not; only microservices are distributed", then you're simply wrong.


Hum... I don't think you understood what I said.

Most of what people call monolithic systems are indeed distributed. There are usually explicit requirements for them to be distributed, so it's not up to the developer.

But ACID databases provide an island of well-understood behavior in the hostile territory of distributed systems, and most of those programs can get by with just an ACID database and no further communication. (Now, whether your database is really ACID is another can of worms.)
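To make that concrete, here is a minimal sketch (in TypeScript with the node-postgres client; the accounts table and the transfer function are invented for illustration) of how a single ACID database lets a developer write a read-modify-write as if their client were the only one:

    import { Client } from "pg";

    // With one ACID database coordinating everything, this read-modify-write
    // can be written as if no other client existed; the database serializes
    // concurrent transactions for us.
    async function transfer(client: Client, from: string, to: string, cents: number) {
      await client.query("BEGIN ISOLATION LEVEL SERIALIZABLE");
      try {
        await client.query(
          "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
          [cents, from]
        );
        await client.query(
          "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
          [cents, to]
        );
        await client.query("COMMIT");
      } catch (err) {
        await client.query("ROLLBACK");
        throw err; // e.g. a serialization failure: retry the whole transaction
      }
    }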


Different kinds of distributed systems have wildly different levels of complexity in the fun their distributed nature can cause. If you have a replicated set of monoliths, you typically have fewer exciting modes of behaviour and failure.

Consider how many unique communication-graph edges and multi-hop causal chains of effects you have in a typical microservice system versus a set of replicated copies of a monolith, not to mention the several reimplementations or slightly varying versions and behaviours of the same.


I don't even consider a replicated set of monoliths to be a distributed system.

If you've done your work correctly you get almost no distributed-system problems. For example, you might pin your users to a particular app server, or maybe you use Kafka and it is the Kafka broker that decides which backend node gets which topic partition to process.

The only thing you need then is to talk to your database properly (an app server talking to a database is still a distributed system!) and to use database transactions, or maybe optimistic locking.
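As a sketch of the optimistic-locking option (again TypeScript with node-postgres; the accounts table and its version column are invented for the example):

    import { Client } from "pg";

    // Hypothetical schema: accounts(id, balance, version).
    // The version column lets us detect concurrent writers without holding locks.
    async function withdraw(client: Client, accountId: string, cents: number) {
      for (let attempt = 0; attempt < 3; attempt++) {
        const { rows } = await client.query(
          "SELECT balance, version FROM accounts WHERE id = $1",
          [accountId]
        );
        const { balance, version } = rows[0];
        if (balance < cents) throw new Error("insufficient funds");

        // The UPDATE succeeds only if nobody else touched the row in between.
        const result = await client.query(
          "UPDATE accounts SET balance = $1, version = version + 1" +
            " WHERE id = $2 AND version = $3",
          [balance - cents, accountId, version]
        );
        if (result.rowCount === 1) return; // success
        // Lost the race: reload and retry.
      }
      throw new Error("too much contention, giving up");
    }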

The fun starts when you have your transaction spread over multiple services and sometimes more than one hop from the root of the transaction.


> Monoliths are also distributed systems and will run on multiple hosts

... not necessarily. Although the big SPOF monolith has gone out of fashion, do not underestimate the throughput possible from a single very fast server.


Well, no matter how fast a single server is, it can't keep getting faster.

You might shoot yourself in the foot by optimizing only for a single server, because eventually you'll need horizontal scaling, and it's better to think about that at the beginning of your architecture.


> eventually you'll need horizontal scaling

This is far from inevitable. There are tons of systems which never grow that much (not everyone works at a growth-oriented startup), or which grow in ways that aren't obvious when initially designing them. Given how easily you can get massive servers these days, you can also buy yourself a lot of margin for part of one developer's salary.


Whatever happened to premature optimization being bad?


Even in a contrived situation where you have a strict cache locality constraint for performance reasons or something, you'd still want to have at least a second host for failover. Now you have a distributed system and a service discovery problem!


So I actually find that microservices should help tremendously here. Service A starts throwing 500s. Service A has a bunch of well-defined API calls it makes, with known, logged requests and responses. Incoming requests should be validated on the way in and produce 400s if they aren't well formed. Most 500s, IMHO, result from the validations not catching all corner cases, or from downstream errors not being handled properly. But in general it should be relatively easy to track down why one call, or a series of calls, failed.

I also find that having separate, distinct services puts up a lot of friction against scope creep in each service, and it also avoids side-effect problems, i.e. you made this call and, little did you know, it updated state somewhere you completely didn't expect, and now touching that area is considered off limits, or at least scary, because it has tentacles in so many different places. Eventually this will absolutely happen, IME. No, of course, not on your team, you are better than that, but eventually teams change, and it's now handled by the offshore or other B/C team, or a tyrant manager takes over for a year or two who is obsessed with hitting the date, hacks or not, etc...

But I guess an absolutely critical key to that is having a logging/monitoring/testing/tracing workflow built in. Frameworks can help; Hapi.js makes a lot of this stuff a core concept, for example. This is table stakes for doing "micro" services, though, and any team that doesn't realize that has no business going near them. Based on the comments here, though, ignorance of this among teams embracing microservices might be more common than I had imagined.
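To illustrate the validate-on-the-way-in pattern, a minimal sketch using Hapi.js and Joi (the route and payload fields are invented for the example):

    import Hapi from "@hapi/hapi";
    import Joi from "joi";

    const server = Hapi.server({ port: 3000 });

    server.route({
      method: "POST",
      path: "/orders", // illustrative endpoint
      options: {
        validate: {
          // Hapi rejects malformed payloads with a 400 before the handler runs.
          payload: Joi.object({
            customerId: Joi.string().required(),
            quantity: Joi.number().integer().min(1).required(),
          }),
        },
      },
      handler: () => {
        // By the time we get here the payload is well formed, so a 500
        // points at our own logic or at a downstream failure.
        return { accepted: true };
      },
    });

    await server.start();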


> So I actually find that microservices should help tremendously here. Service A starts throwing 500s. Service A has a bunch of well-defined API calls it makes, with known, logged requests and responses.

This isn't wrong (although there is a reasonable concern about expanding interconnection problems), but I think there's commonly a misattribution problem in these discussions: a team which can produce clean microservices by definition has a good handle on architecture, ownership, quality, business understanding, etc., and would almost certainly bring those same traits to bear successfully on a more monolithic architecture, too. A specific case of this is when people successfully replace an old system with a new one and credit the new languages, methodology, etc. more than their better understanding of the problem, which is usually the biggest single factor.

Fundamentally, I like microservices (scaling & security boundaries) but I think anything trendy encounters the silver bullet problem where people love the idea that there’s this one weird trick they can do rather than invest in the harder problems of culture, training, managing features versus technical debt, etc. Adopting microservices doesn’t mean anyone has to acknowledge that the way they were managing projects wasn’t effective.


I think this nails it. It's not the concept's fault if the implementation is half-assed.


> It's not the concept's fault if the implementation is half-assed.

Worse than that. Case in point: an interview at a major US national bank for a developer position. They talk about moving toward microservices, so a simple question, since they mention microservices: will someone be able to access and modify the data underneath my service without going through my service? The uneasy answer: yes, it does and will happen.

That's the end of the interview as far as I am concerned.

If you can access and modify the underlying data store without going through my microservice, not only is it not a microservice, it isn't much of a service-oriented architecture. This isn't me being a purist, just practical: if I need to coordinate with five different teams to change the internal implementation of my "microservice", what is the point of doing service-oriented architecture? All the downsides with zero upside?


That's a distinction worth following up on:

My take is that there are 3 kinds of microservice:

* Service ownership of data - if you want to change a customer's name, there is only one place to go.

* Worker services - they don't really own data; they process something, usually requesting it from golden sources. Just worker bees, but the thing they do (send out marketing emails to that person's name) is not done by anyone else.

* Everything else is borked


Sorry, that's a distributed monolith. If everyone can mess with the data directly you might as well keep it in one service.


LOL, I'm in talks about migrating microservices into a monolith purely because there is so much overhead in managing them. You need at least a couple of people just to keep it all in place, and needing that means the company is effectively prepared to kill the product even when there are actual customers, plus new ones coming.

Microservices make sense when you have millions of users and need to scale horizontally quickly, when you have a zillion developers (which probably means your product is huge), or when you are building a global service from the get-go and are funded by a VC.


I can understand Résumé-Driven Development. Our industry is famous for its "Must have (X+2) years of (X year old technology)" job requirements.


A sister team at work (same manager, separate sprints) split a reasonable component into three microservices: one with the business logic, one which talks to an external service, and one which presents an interface to outside clients. They then came to own the external service as well, so now they have 4 microservices where one or two would have been plenty. I don't hesitate to poke fun when that inevitably brings them frustration. Another team we work with has their infrastructure split across a half-dozen or more microservices, and then they use those names in design discussions as though we should have any idea which subcomponent they're talking about. Microservices are a tool that fits some situations, but they can be used so irresponsibly.


> There are actually ways to manage a huge monolithic application that don't require each team to have its own repository, CI/CD, binary, etc.

Would be interested to hear about some of these.


The basic technique is to use a modular architecture.

You divide your application into separate problems, each represented by one or more modules, and you create APIs for these modules to talk to each other.

You also create some project-wide guidelines for application architecture, so that the modules coexist as good neighbors.

You then have separate teams responsible for one or more modules.

If your application is large enough you might consider building some additional internal framework, for example a plugin mechanism.

For example, if your application is an imaginary banking system that takes care of users' accounts, transactions, and products, you might have some base framework (covering the flows of data in the application, events like pre/post date change, etc.), and then have different products developed to subscribe to those flows of data or events and act upon the rest of the system through internal APIs.
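A minimal sketch of that shape in TypeScript; the event names, the tiny in-process event bus, and the "overdraft product" module are all invented for illustration:

    // One process, many modules: each module exposes a narrow API and
    // subscribes to application-wide events instead of sharing internals.

    type AppEvent =
      | { kind: "pre-date-change"; date: Date }
      | { kind: "transaction-posted"; accountId: string; amount: number };

    class EventBus {
      private handlers: Array<(e: AppEvent) => void> = [];
      subscribe(handler: (e: AppEvent) => void) { this.handlers.push(handler); }
      publish(e: AppEvent) { for (const h of this.handlers) h(e); }
    }

    // The accounts module's internal API; other modules see only this interface.
    interface AccountsApi {
      getBalance(accountId: string): number;
    }

    // A "product" module owned by another team, plugged into the same binary.
    function registerOverdraftProduct(bus: EventBus, accounts: AccountsApi) {
      bus.subscribe((e) => {
        if (e.kind === "transaction-posted" && accounts.getBalance(e.accountId) < 0) {
          console.log(`overdraft fee applies to ${e.accountId}`);
        }
      });
    }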


Perfectly described! People forget, or don't know, that there are many different architectures across monolithic systems, and that there are things in between a monolith and microservices.


A monorepo, I guess. It looks like Microsoft is using a monorepo for their Office applications (https://rushjs.io/), and you could do the same thing for Node.js using yarn workspaces/lerna/rush.

Each "microservice" could live in a separate package which you can import and bundle into a single executable (see the sketch after the links below).

Elixir has "Umbrella Projects": https://elixirschool.com/en/lessons/advanced/umbrella-projec...

Rust/Cargo has workspaces: https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html
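For the yarn-workspaces variant, a rough sketch of what the single entry point might look like (the @bank/* package names and the layout in the comments are invented for the example):

    // Hypothetical monorepo layout:
    //   package.json            -> { "private": true, "workspaces": ["packages/*", "apps/*"] }
    //   packages/accounts/      -> published internally as @bank/accounts
    //   packages/payments/      -> published internally as @bank/payments
    //   apps/server/src/main.ts -> this file, bundled into the single executable

    import Hapi from "@hapi/hapi";
    import { accountsRoutes } from "@bank/accounts"; // team A's package
    import { paymentsRoutes } from "@bank/payments"; // team B's package

    const server = Hapi.server({ port: 3000 });
    server.route([...accountsRoutes, ...paymentsRoutes]); // one process, one deploy
    await server.start();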


Well, for starters, each component's API can be published and versioned separately from its implementation.

The build of a component would only have access to the APIs of the other components (and this can include having no knowledge of the container it runs in).

The implementation can then change rapidly, with the API that the other teams develop against moving more slowly.

Even so, code reviews can be critical. The things to look out for (and block if possible) are hidden or poorly defined parameters like database connections/transactions, thread-local storage, and general "bag" parameters.

In some languages dependency injection can be useful here. Unfortunately, DI tools like Spring can actually expose the internals of components, introduce container-based hidden parameters, and usually end up being a versioned dependency of every component.
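A sketch of that API-versus-implementation split in TypeScript terms (the @bank/* package names and the PaymentsApi interface are invented for the example): the API package contains only types, so depending on it cannot leak another component's internals.

    // @bank/payments-api: published and versioned on its own; types only.
    export interface PaymentsApi {
      charge(accountId: string, amountCents: number): Promise<void>;
    }

    // @bank/payments-impl: free to change rapidly; other teams never import it.
    export class PaymentsService implements PaymentsApi {
      async charge(accountId: string, amountCents: number): Promise<void> {
        // Database connections, transactions, etc. stay hidden in here;
        // they never appear as parameters on the published API.
      }
    }

    // Composition root: the one place that wires implementations to APIs,
    // keeping the container out of component internals.
    export function wirePayments(): PaymentsApi {
      return new PaymentsService();
    }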


> I have been saying the same for years but have either been punished by my managers or mercilessly downvoted on HN.

Ex-Amazon SDE here. I've said many times that Amazon tends to have the right granularity of services: roughly 1 or 2 services per team.

A team that maintains the service in production autonomously and deploys updates without having to sync up with other teams.

[Disclaimer: I'm not talking about AWS]


I don't think you can downvote on Hacker News. That's Reddit.


You can; you just need 501 karma first. It seems like someone wanted to prove you wrong, since you've been downvoted.


Haha seems a few people wanted to prove me wrong :) Didn't know that!


You need 500 karma for the downvote button to appear: https://jacquesmattheij.com/the-unofficial-hn-faq/#karma