Reading the responses/comments in here, a question arises...
Are things really as black and white as people here paint them - with monoliths unavoidably and necessarily becoming a tangled mess of spaghetti, and micro-services being the singular, only way to achieve clean separation, autonomy and partitioning?
With microservices inevitably being an organizational nightmare that leads to problems in inter-team coordination, while monoliths automatically ensure that everyone is always perfectly on the same page?
Because in my personal experience, you totally can code a monolith in a strictly modular fashion, with clean interface-based API boundaries between the separate parts. And if you put those "modules" into separate packages, the dependencies between them end up being managed and documented as well. Just like you totally can set up microservices and the surrounding procedures in a way that actually increases the feeling of product ownership in the teams and reduces the hassle of inter-team coordination - especially in larger companies.
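To make that concrete, here is a minimal sketch of interface-based module boundaries inside a single process (all module, class and method names here are hypothetical, not from any real codebase): each module exposes one small public API class, and other modules depend only on that contract, never on internals.

```ruby
# Hypothetical modular monolith: Billing and Checkout live in one
# process, but Checkout may only talk to Billing through Billing::API.

module Billing
  # The only entry point other modules are allowed to use.
  class API
    def self.charge(account_id:, cents:)
      LedgerEntry.record(account_id, cents) # internal detail
      { status: :ok, account_id: account_id, cents: cents }
    end
  end

  # Internal implementation; nothing outside Billing should touch this.
  class LedgerEntry
    def self.record(account_id, cents)
      # ...persist to the billing-owned tables...
    end
  end
end

module Checkout
  class API
    # Checkout depends on Billing only through its published interface.
    def self.complete_order(account_id:, total_cents:)
      Billing::API.charge(account_id: account_id, cents: total_cents)
    end
  end
end
```

If Billing later needs to become a real service, only `Billing::API` has to grow a network transport; Checkout's call site stays the same.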
Obviously there are ways to do monoliths right, and spectacularly wrong. And there are ways to do microservices right and wrong...
I see so many arguments here that compare one method done wrong with the other method done right - and then imply that this is a strong reason for choosing that second method.
Perhaps it is easier to understand when you remember that service is not a technical term? People provide service.
In the macro economy, service comes from other companies. If you integrate an LLM into your application, you may use the services of OpenAI. If you integrate payment processing into your application, you may use the services of Stripe. Microservices are just like services, except offered within a micro economy (think a single business).
In other words, it's a team separation technique. You let teams within a business organize as if they were separate businesses and let them offer services to each other as if they were operating different businesses.
How those teams design their software is ultimately irrelevant. The key aspect is keeping active communication between teams to a minimum, using published documentation and API contracts as the mode of communication between teams, just as we do on the macro scale. Getting another team on the phone should be as hard as getting Google on the phone – i.e. basically impossible, but also basically unnecessary.
This is done in large organizations to ensure that developers aren't forever bogged down in meetings. If you have 10,000 developers all trying to work on the same project the communication overhead will kill you. Microservices is a way to try to overcome that. Of course, it serves no purpose in a small company where an economy can't reasonably grow anyway.
With no or minimal communication, how is it possible to build anything new? In the real world, if you need to build something new and it requires 10 different companies to do it, it would be virtually impossible.
A lot of customer value comes from companies that assemble ideas that have existed in isolation. Customers think they hate that, but the numbers say they are wrong.
Foods are an early example. Most are rearranging the same dozen ingredients in different quantities and orders of operation.
Vertical integration is not an all or nothing proposition. Companies can make pretty unique solutions by specializing a couple of pieces.
I think the confusion is because "something new" isn't necessarily a completely new, disconnected product. It could be something like adding multi-lingual support to an existing product. In that case, you often do need to coordinate across many teams to get the new feature implemented, which can be a huge pain in a micro-service architecture.
No more than it is a pain in a service architecture – it being exactly the same thing, just in a different economy. As always, if a service doesn't do what you need, you build your own in-house. Just because there is a product on the market that is kind of, sort of, but not really, what you need doesn't mean you must use it.
> You let teams within a business organize as if they were separate businesses and let them offer services to each other as if they were operating different businesses.
> This is done in large organizations to ensure that developers aren't forever bogged down in meetings. If you have 10,000 developers all trying to work on the same project the communication overhead will kill you. Microservices is a way to try to overcome that.
Microservices do work better in large organisations, but I don't think the boundary should be defined by teams, because large orgs perform re-orgs more often than you'd expect.
It really should be defined by product. When a company is reorganised, then you'd be passing around products and product groups and not individual microservices, to the new teams.
I'm not sure a team is defined by individuals who may come and go. Look at professional sports teams that have been around for a long time. None of the people involved in the team today are the same people who were there in the beginning. The people don't define the team.
I expect you will find that service, product, and team all mean approximately the same thing in this context.
A lot of factors come together whenever the term Microservice or Monolith arises:
- Monolith has implicitly replaced the word "Legacy" system
- Microservices were (/are still) the cool thing to do in a conference-driven industry. They were championed by big tech, which desires granular and flexible deployment systems (whereas in smaller products your code just needs to run and everything else is just overhead). Having multiple instances of a microservice can fix some performance issues as well.
- People rarely talk about failing with their new (microservice) projects.
- Microservices can be a technical solution for human problems. Large teams benefit from distributed deployments for organizational reasons. They also force you to think more about modelling your responsibilities, which might be at odds with your company culture.
To me, Microservices are not an architecture pattern but a deployment strategy. Obviously, you want high cohesion and modules, which you might or might not get with Microservices. It comes down to understanding your problem domain.
If you have a modular monolith, you can still deploy it separately at a later point if the need arises. Even having a shared database is not a problem if your modules are only using the respective tables they own (this can be enforced with permissions).
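The table-ownership idea can also be guarded at the application layer. This is a sketch with hypothetical names; in a real setup you would back it with actual database permissions (e.g. per-schema GRANTs to each module's DB role), as the comment above suggests:

```ruby
# Hypothetical: each module declares the tables it owns in the shared
# database, and any attempt to touch another module's table fails fast.

OWNERSHIP = {
  billing:  %w[invoices ledger_entries],
  shipping: %w[shipments carriers]
}.freeze

class ModuleConnection
  def initialize(module_name)
    @owned = OWNERSHIP.fetch(module_name)
  end

  # Guard every query: only tables owned by this module are reachable.
  def query(table, sql)
    raise "access denied: #{table}" unless @owned.include?(table)
    # ...run sql against the shared database here...
    :ok
  end
end

billing = ModuleConnection.new(:billing)
billing.query("invoices", "SELECT * FROM invoices") # allowed
# billing.query("shipments", "SELECT 1")            # would raise
```

The point is that "shared database" does not have to mean "shared tables" - the boundary just moves from the network into the schema.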
> Because by my personal experience, you totally can code a monolith in a strictly modular fashion, with clean interface-based API boundaries between the separate parts.
This is my experience as well. In fact, that's exactly how we transformed a spaghetti mess monolith into something that was still a monolith, but clean and well abstracted. As things evolved, we even split some of the interfaces out into first-class web services that we hosted on different infrastructure. It was a near-trivial operation.
For me, adhering to Service Oriented Architecture has been the consistent theme in keeping things manageable. Thinking about who "owns" data vs. who "uses" it, how data from these different domains is linked, and keeping things modular and logically distinct are the real keys. This applies across the whole spectrum from monolith to microservices, and makes for maintainable code where key functionality can easily be split out if needed.
I often have had the feeling that people tend to try to solve code-organisation issues with microservices. Sure you can code a tangled mess in a monolith, but adding TCP connections and Kubernetes to the problems you have probably won't make things better. Instead, rewriting some code to use whatever applicable design pattern probably will make it better.
An analogy that comes to mind is looking for a new house. Most people can't see past the current decoration and furniture of a house and imagine themselves living there in their own style. So real estate agents try to get rid of as much personal clutter as possible.
It seems to me that many developers can't imagine how to refactor an existing messy monolith into a tidy monolith, but they can imagine an empty repository.
> Sure you can code a tangled mess in a monolith, but adding TCP connections and Kubernetes to the problems you have probably won't make things better.
Yes!
I call this "moving the spaghetti". That tangled mess in a monolith can easily be (and often is) replaced with an equally tangled mess in microservices. It's just that the mess will be in the connections between services rather than in the code itself.
I see the same thing in certain OOP camps -- removing the spaghetti code from the actual lines of code into the inheritance hierarchy instead.
It's much like sweeping the floor and leaving the dirt under the carpet. At first glance, it looks cleaner. But if you have to move the carpet, you see the truth.
My 2c: you can get things right, but most of the time you won't, for many reasons - technical, logistical, cultural or merely political. Sometimes you don't control these reasons.
So you are now left with managing risk. It's trade-offs all the way down.
> but most of the time you won't, for many reasons
This.
One thing that I noticed working on 3 different continents is that the learning/teaching of technology (CS?) is highly fragmented, and it's quite hard to find "common ground" in practices and methods; meshing all of that together in a socio-technological setting makes everything harder.
A weak generalization to illustrate the point: a [Continent A] developer will be more resource-aware due to their natural lack thereof; a [Continent B] developer, having more human capital available (better mentors, a big ecosystem, a huge amount of finance capital), will have more room for scaling ideas; a [Continent C] developer will have a better understanding of the intricacies due to their formal education and cultural background.
Placing all those people on a tech project without a program to level out those biases and establish a shared, understandable set of expectations is the root cause of this whole debacle.
I think you're right to be skeptical that microservices are required in order to stop code becoming spaghetti.
I think of it as all about what kind of boundaries you choose to put in your code and where. https://hivekit.io/blog/mesolithic-architecture/ The blog calls it 'mesolithic' architecture, but I almost wonder if 'gabion' architecture might be better - separate small rocks held together by a metal frame, forming a single architectural piece.
The principles of good architecture do not require you to make your boundaries network or process boundaries, but they do require that you have boundaries and that you're careful when and how you cross them.
Indeed, how these domains communicate is rather irrelevant. It can be HTTP, injected objects, closures or functions, a message bus, an event store.
I've worked on a few actually good Rails apps (out of hundreds of abysmal to just meh ones). Some just put all the bounded domains in libraries (gems, railties), some used pure message passing, and one relied on sheer discipline to never add that has_many that would cross a domain boundary.
Microservices are just one, very specific way to erect and enforce bounded domains. But there are many more.
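For example, an in-process message bus is enough to keep domains decoupled without any network hop. This sketch uses hypothetical names; the transport behind `publish` could later be swapped for HTTP or a real broker without touching the domain code:

```ruby
# Minimal in-process pub/sub: domains react to each other's events
# without holding direct references to one another.

class Bus
  def initialize
    @subscribers = Hash.new { |hash, event| hash[event] = [] }
  end

  def subscribe(event, &handler)
    @subscribers[event] << handler
  end

  def publish(event, payload)
    @subscribers[event].each { |handler| handler.call(payload) }
  end
end

BUS = Bus.new

# The Shipping domain reacts to an Orders event; Orders knows
# nothing about Shipping.
BUS.subscribe(:order_placed) do |payload|
  puts "shipping order #{payload[:order_id]}"
end

BUS.publish(:order_placed, order_id: 42) # prints "shipping order 42"
```

The domain boundary here is the event contract, not the transport - which is exactly why microservices are only one of several ways to enforce it.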
I guess, though, that in practice, "monolith" means "lack of any decoupling or domain boundaries". At least, to many it apparently does.
So then there's still a large group of architectures that aren't exactly microservices (http communication, isolated runtimes and processes) but aren't monoliths either.
"Single Process" - Can be module-based (Shopify), where multiple modules interact but must still be combined for deployment; these modules can even extend to the DBs. Can still have multiple instances for performance.
"Distributed Monolith" - This is the one to fear most: multiple services, yet still shared DBs, meaning things have to be deployed together.
Both are monoliths, but one is less susceptible to becoming a ball of mud. It's a sliding scale, with trade-offs for each (and many variations in between those listed).
I think it's a lack of common terminology surrounding monolith / micro-service that creates this binary illusion.
In my opinion it's a people problem. Lots of developers are focused on pure technicalities. They will focus on writing code and ignore why it's written.
As a result, their choice of architecture will be detached from business function. Also, since thinking about architecture isn't writing code, it will often be an afterthought based on shallow knowledge and superstition.
Keeping that in mind - when you are talking to a bigger team containing people of various seniority, it's often best to give them a rigid set of rules, because they'll misinterpret or ignore anything more complex.
And that's why black and white approach often works - it's far from ideal, but at least possible to actually execute in many of the teams.
The imho rational thing to do would be to compare the pros and cons of each option and reach a conclusion that way.
Comparing only the pros of one solution with only the cons of the other, seems a bit... disingenuous and an emotionally-driven line of argument.
And I personally just do not believe that making decisions with such wide-reaching implications as monolith or microservice based on emotional arguments, does lead to useful results?
The only thing that imho can come from this sort of arguing, is the kind of religious divide, evangelism and proselytism that ultimately only hurts both sides of the argument, because it makes people favor dogmatic choices over considering what's actually useful in the current situation?
>The only thing that imho can come from this sort of arguing, is the kind of religious divide, evangelism and proselytism that ultimately only hurts both sides of the argument, because it makes people favor dogmatic choices over considering what's actually useful in the current situation?
Of course :-) I'm not advocating for it in any way. Quite the contrary. That's just what I've seen - which, sadly, is that tons of developers are not able to rationally discuss architecture and just default to dogma.
Often someone dropped stone tablets years ago and they just follow them. If the instructions they were given were simple and well understood at a mechanical level, the result is usually a bit better than total chaos. Thus the black & white approach.
The worst WTF garbage software I have seen in my 30+ years career is an old micro-services system maintained by a team at work.
It uses more than 100 micro-services to implement a system that would have been so much simpler if implemented as a monolith. With hundreds of scripts to try and manage the complexity. With frequent timing bugs (a single micro-service delay causing a cascade of timed-out micro-services, bringing the system down) and complex error scenarios that are almost impossible to fix and will make experienced developers' heads spin.
In contrast, the systems I work on are million+ line C/C++ monoliths that are orders of magnitude simpler to maintain. With orders of magnitude fewer bugs in production. All implemented as a collection of well-designed libraries.
IMHO the moment you split a monolith into 100+ micro-services, the number of failure scenarios dramatically increases because of the added complexity of message exchanges, message failures, queuing delays, circular dependencies, etc.
In other words, if you don’t even have the skills to implement a monolith successfully, then stay the hell away from micro-services.
Monorepo, separate modules is the way to go. I just don’t think it occurs to people.
But I think for compiled languages people see the build steps as particularly wasteful, both locally and in CI. They forget about the integration test surface area or think it’s something to be paid for later (tech debt) and reason that if they are changing one module why should they have to pay to compile and test the whole thing?
A good tooling or ops person can ameliorate a lot of that. There are a lot of poor patterns that people ignore when build units are small and incremental, that you cannot ignore when they get large. And then there are tools that can auto start tests, which reduces the latency in code-build-test quite a bit.
If you don't get something, perhaps it's that this is a bit of a holy war, and as with all holy wars, each camp tends to argue things from a very black-and-white perspective. As with most things, though, the truth is not so binary.
I think most people agree with your post but you are indeed missing the point.
People are discussing anecdotal "average" cases. Not what is possible, but what actually happens.
Of course anything can and does happen, and outliers are interesting, but you shouldn't assume they are equivalent to other cases just because they are possible. The distribution does matter and since it's not easily measurable people discuss anecdotes and opinions.
The moment you adopt service based teams with service based managers, say goodbye to engineers caring about working product. Say hello to cross team meetings and project management every time you want to ship a feature.
It's pure vanity for a startup to think they will become the next AWS by adopting hard service-based contracts between teams.
This viewpoint doesn't translate to reality in my experience. Monolith or not, engineers care about working product if they are incentivized to care about working product (and if they aren't, they don't).
Code is code. Cognitive load is cognitive load. Doesn't matter how you organize it. Unless your company is still very small and simple, there's no single team that is going to be able to understand how the entire system works (and take ownership of every part of it working properly). I've worked in many monoliths where you still had to have "cross team meetings and project management every time you want to ship a feature".
> engineers care about working product if they are incentivized to care about working product
Places that set up the correct incentives are usually very rare. Maybe hedge funds or some finance places which give you a decent chunk of the profits your code makes but that's about it.
Every other place pays you the same "meh" salary and the quickest way up is to job-hop frequently, in which case resume-driven-development takes priority over "working product". That's how you get cargo-cult over-engineering, everything else be damned since you will be gone by the time the consequences emerge anyway.
(also you have "startups" where over-engineering is usually a desired feature as it gives them a justification to grift more VC money to keep their unsustainable business afloat further).
Honestly, in my experience there's nothing wrong with this line of thinking, as long as you also consider that the product you're shipping to other devs is in service of a much more important and larger product/service in itself. Thinking of your service as a product keeps you thinking about the use cases, the potential errors and the DX of what you're shipping, which is a good thing, I'd say, as long as you're not building pies in the sky. As with most things, it comes down to good planning.
The key word here is DX. If a team is exposed to how the DX of their owned service or library enables others in the company to build things that bring smiles to customers' faces, they'll be able to better design for their downstream teams' experiences, finding the right balance of keeping API churn low while building for future flexibility. This is true whether the API boundary is over a network or over a function call.
That said, microservices have the added problem that if you don't get the culture and boundaries set just right from the beginning, it's harder to change. They're good for small domains where boundaries are defined, and many even-early-stage startups have some of those, but it's unlikely that the primary interface for customers is included in that.
The moment you adopt feature teams with feature managers, say goodbye to engineers caring about working infrastructure. Say hello to cross team meetings and project management every time you want to deal with anything platform related.
I appreciate you qualified it as for startups, but I've worked with plenty of former startups/scaleups that were bogging down rapidly due to infra or scaling issues. Since they were feature factories, no team or PMs were measured on or cared about overall system health. This includes "majestic monolith" cough stacks with Rails.
Adding some services were clear wins and needed for things like external APIs, Auth, reporting, etc. - but this is a far cry from AWS microservice hell for simple UI/CRUD features.
This doesn't ring true for me at all and I've dealt with the dreaded omni-rails stack (or worse the dreaded tomcat springboot stack). Monoliths are far far easier to scale because you have one throat to choke and performance patches / elimination of n+1's spread across the whole app. Also in a monolith unit tests have the power of integration tests in a microservice world. I can actually be sure the boundaries between components behave as I expect and fail loudly if they change in a manner that breaks me. And the last thing in the world I ever want to do for scaling purposes is stick a network call/serialization step between two pieces of my own app because that's a bottleneck waiting to happen.
It feels like a lot of people reach for microservices when they should be reaching for sharding and/or moving reads to replicas (which you can finally do in Rails, thank god).
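For reference, Rails 6+ supports read replicas natively via the multiple-databases API. A configuration sketch (database and role names like `primary_replica` are placeholders for whatever your app uses):

```ruby
# config/database.yml (sketch):
# production:
#   primary:
#     database: app_production
#   primary_replica:
#     database: app_production
#     replica: true

class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
  # Writes go to the primary, reads may go to the replica.
  connects_to database: { writing: :primary, reading: :primary_replica }
end

# Route a block of read-only queries to the replica explicitly:
ActiveRecord::Base.connected_to(role: :reading) do
  Report.where(status: "done").count # hypothetical model
end
```

This keeps read scaling inside the monolith, with no service boundary or serialization step added.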
My current job is working with around 20 other engineers on an extremely bloated and coupled monolith. I'd love to be able to separate myself and my team from others by an agreed-upon interface.
Yes, "agreed-upon" is certainly doing some heavy lifting there, but I don't think it's realistic to expect that
> cross team meetings and project management every time you want to ship a feature
> My current job is working with around 20 other engineers on an extremely bloated and coupled monolith. I'd love to be able to separate myself and my team from others by an agreed-upon interface.
> Yes, "agreed-upon" is certainly doing some heavy lifting there, but I don't think it's realistic to expect that
This likely seems overly cynical to you because you haven't experienced maintaining micro-services in the long term. Being able to "break away" with a common interface is literally the microservice tag line. It's completely true, too. Standing up a service w/ an interface is incredibly fast and rewarding, and watching it hum is beautiful.
The problem starts when there's a bug that's upstream of your service. It's not too hard to get all 15+ services running on your laptop, but the problem is the other teams no longer let external members directly deploy the services they maintain (that one time someone from another team deployed a big bug). So now you've got to get PR approval and seemingly no one wants to review your bug. So you ask for time from the Product Owner and it gets forgotten because they're really busy too. So you go to an eng manager, etc etc...
The above scenario plays out SO many times. I've been there in all its various forms. As a blunt statement - any team under 30 members: monolith.
If you have to "get all 15+ services running on your laptop", you've already lost. Every one of our 100+ components maintained by our dozen person team can be run and tested on its own. The only reason we have Docker on our machine is to test the images we deploy (just to make sure they build and run).
You have experienced what happens when web developers cosplay as software architects. That's when you get "microservices". It's a travesty, and it has ruined an entire generation of software developers who now believe that partitioning and autonomy are just myths.
So, as a blunt statement - your experience is typical. That is, it's average. That is, you experienced an average team doing average work misguided by average bad advice. It says absolutely nothing about software design. It only speaks to the fat part of the bell curve doing what it always does.
> So, as a blunt statement - your experience is typical. That is, it's average.
Definitely a fair judgement and would agree that this constitutes poor architecting on many many companies' parts including ones that I've been a part of.
> Every one of our 100+ components maintained by our dozen person team can be run and tested on its own.
Can you elaborate on this? What is the primary benefit of running 100-300 components vs a monolith specifically with a 12 person team?
More directly, what's wrong with a monolith at the dozen person team size?
> Can you elaborate on this? What is the primary benefit of running 100-300 components vs a monolith specifically with a 12 person team?
> More directly, what's wrong with a monolith at the dozen person team size?
Great question. It's honestly a hard question to answer because the real answer requires tacit knowledge, but I'll try anyway because you asked. I'll link a couple articles of longer form writing that may help to answer as well.
The benefit ultimately is productivity. That productivity comes from autonomy, which comes from partitioning.
Each component can be worked on, tested, and deployed on its own. We never need to spin up more than one at a time, which means any time we are working on our code, we are typically working on 1/400th of it, and we know we are because we have 400 separate GitHub repositories. We can't accidentally get peanut butter in our mayonnaise. Each part is independently built and tested.
When it's time to upgrade Rails, we can upgrade one application. This is 1/20th of our Rails code. That's a smaller batch. We know from basic flow principles (see: Principles of Product Development Flow, Lean, etc.) that small batches are better for productivity.
We rarely have merge conflicts. We don't do pull requests. We branch when we need to. We are often working on a brand new, never been deployed project. We don't have our tests running in CI. Our tests run in less than a second in most projects on our laptops. With those that have UI, our tests typically run in less than 30 seconds which include thorough Capybara (UI Interaction) testing.
When we are working in any application, we only need to consider its direct efferents when making a change to any interfaces it exposes. We can trace those changes through easily when we need to make them. Because we practice software design, we don't often need to make large sweeping changes, but when we do, we can do them methodically without disruption.
The "worst" part is that some things are tedious. But we can automate tedious things. We can invent around tedious things. We used to have to deploy 15 web applications any time we had a style change. Now we deploy one application that supplies the CSS for all our applications via SSI (server-side-include). If we need to deploy 70 back end components, we can script that (with manual verification at each step). We can even spread that out across a couple team members and do it as an ensemble. It takes about an hour to deploy everything. We maybe do this once a month or so (when something highly afferent changes -- and guess what, we structure our design so that afferent things change as infrequently as possible).
Most things we work on we work until they are "complete". That is, they rarely change after that. We have components that have been in production for 3 years and haven't been substantially touched aside from Ruby upgrades and the like. They are still perfectly in control.
The list goes on and on.
Here are a few articles, feel free to poke around the others and ask any follow up questions.
Great that it works for you and your particular project.
Without knowing the details of your project and team, I'd take a monolith any day over 400 microrepos and the 70+ backend microservices. Our customers don't care if it's microservices or monoliths as long as it works and meets SLAs, so the question really is: do I want to upgrade Ruby or Rails 400 times and deploy 100 services, or just do it once in my monolith and deploy once?
Need to upgrade a gem that's used in 100 of those repos to address a CVE - no thanks. Worse, 10 of those repos were owned by someone who left the company a year ago, and since there were no PRs, no one knows how to work in the repo or understands what it does.
That isn't actually the question. I know it's not the question because we're capable of it, it is infrequent, and it takes on the order of single digit person hours a month for that type of concern for our particular project. Also, I didn't say 400 anything but repos. See some of my other comments for what those repos consist of.
> 10 of those repos were owned by someone who left the company a year ago
That's not how it works. Repos aren't "owned" by someone. Each repo stands on its own, is clearly testable on its own. Every one of our teammates is capable of updating every one of our projects because there is consistency between them.
If upgrading Rails or any gem scares you, you have other problems to deal with. For me personally (I'm relatively fast on this team) I can update Rails in all 20 of our Rails applications in maybe 30 minutes unless Rails caused a setback (this is somewhat frequent between propshaft and turbo). If Rails caused a setback, we have two dependencies (rails-application-operational and rails-application-development) that we can either include a patch or pin Rails. All told, it's a quick process.
You know what we don't have to spend time doing? Tracing callbacks across 30 entangled ActiveRecord models. Or anything else that comes from entanglement. We also don't need to entertain palliatives like packwerk and the like.
How do you know that? Hint: you don’t. You’re not the first to tell us what we do is impossible, and you won’t be the last.
By the way, I know how it sounds. And I know why a person would fear it. And, I know why they’d be incorrect in their assessment, because, unlike them, I have significant material experience in both camps.
This whole business has had few conceptual ideas that stand apart from fashion and have stood the test of time. A lot of people look down on C now for various reasons, but it's still extremely useful and going strong 50+ years later. Even though I'm older than C itself, in the last year or two of using it intensively I have found new insights into how to arrange things and what to avoid to make things more robust and maintainable.
It's not likely that hundreds of "microrepos" on github and even rails are The Last Word that people will eventually converge on and be using into the 2100s. It sounds like each repo is its own standalone thing with its own deps. There are advantages and disadvantages to maintaining this kind of thing, some of the disadvantages will only become clear as the deps evolve and mutate. Although you may feel you are in a local optimum with this architecture in 2023, ahead of the game, energized by your belief in it, you probably are not finished evolving your understanding even on the scale of one or five years.
One way or another almost everything in this business is ephemeral, disposable and will be disposed of in favour of something else.
Do you know what does stand the test of time? Design principles that are based on the immutable laws of physics. There's a reason that I can quote Plato to support the notion of partitioning. There's a reason that the basic principles are used in the design of every single significant thing in the world. There's a reason Linux has as many modules as it has (for them, a single monorepo works, but multiple works too -- see any library or application that isn't in the primary monorepo but is distributed via a package manager).
Also, no one said "microrepos". Don't project the current fad's fixations onto what I am describing. I'm simply describing structural design. Do thing X and thing Y need common behavior? Is it actually generalized behavior? That's what we have called a "library" for as long as we've been building non-trivial applications out of software.
Are there two disparate business processes that don't need to share state, and you know they don't need to share state because you've done the design work? Those are two separate components. Pub/sub exists for collaboration between those components as necessary.
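To make that collaboration style concrete, here is a minimal in-process Ruby sketch. The bus, the component names, and the payloads are all illustrative; a real setup of the kind discussed in this thread would publish through a durable message store, not an in-memory object:

```ruby
# Minimal in-process pub/sub sketch: two components that share no state,
# collaborating only through published events.
class Bus
  def initialize
    @subscribers = Hash.new { |hash, topic| hash[topic] = [] }
  end

  def subscribe(topic, &handler)
    @subscribers[topic] << handler
  end

  def publish(topic, payload)
    @subscribers[topic].each { |handler| handler.call(payload) }
  end
end

bus = Bus.new

# "Fulfillment" reacts to events from "Orders" without sharing any state with it.
shipments = []
bus.subscribe(:order_placed) do |order|
  shipments << { order_id: order[:id], status: :pending }
end

# "Orders" publishes its event; it knows nothing about Fulfillment.
bus.publish(:order_placed, { id: 42, total: 99 })

puts shipments.first[:order_id]  # => 42
```

Neither component holds a reference to the other; the event name is the entire contract between them, which is the point of the partitioning.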
I'm not trying to say that our current technique for all things is the end-all-be-all. That would deny the continuous improvement mindset (i.e., the relentless pursuit of perfection) that got us where we are. But, there are things that won't change. Smaller batches will always be (generally) easier to manage than larger. Taking something that has high afferent coupling and adding specializations to it will always be a mistake (See Fat Model, Skinny Controller).
The reality is that there are software design mistakes that are knowably mistakes. They've been known for some time as well and can be measured. Those are the things that we avoid and those are the principles that have guided where we are.
I know this sounds highfalutin, but there is someone on our team that has been doing this for over 30 years (by "this", I mean continuous improvement/seeking perfection) and has studied under many of the greats in our industry. We aren't just making this stuff up.
Nobody is denying partitioning is helpful. It’s the implementation details of that partitioning that cause the debate. IMO using repos as a partition boundary is an abuse of git. And IMO invoking Plato to make a point is not helpful. Argumentum ad antiquitatem. This goes for the “30 years” comment as well: it’s entirely conceivable that they’ve been doing things inefficiently for 30 years. More likely that there is such a huge sunk cost that they are heavily invested in justifying it.
> IMO using repos as a partition boundary is an abuse of git.
How so? Why don't we put our Rails applications in the Rails repository?
> And IMO invoking Plato to make a point is not helpful. Argumentum ad antiquitatem
What I am pointing to is a common thread. Plato, Christopher Alexander, Edwards Deming, etc. What I am arguing for is anything but common tradition. It's counter to what is current "common knowledge".
> This goes for the “30 years” comment as well: it’s entirely conceivable that they’ve been doing things inefficiently for 30 years.
Yes, entirely possible. And if we find a better way to do things, we will be first in line.
> More likely that there is such a huge sunk cost that they are heavily invested in justifying it.
As someone who has worked in monoliths and microservices, monorepos and github galaxies, I think it would be better if we could converge on a monorepo managed by something like Bazel where you can just create a new build target that lists its dependencies, and can work directly with the dependency graph via its querying mechanism. Using repositories to encapsulate dependencies and build scripts I think is much too heavy of an abstraction and leads to poor discoverability. When I have to deal with that I usually wind up treating it like a monorepo anyways by creating automation to check out all repos and manage syncing them.
> When I have to deal with that I usually wind up treating it like a monorepo anyways by creating automation to check out all repos and manage syncing them.
Yes! Absolutely. With one key difference: You can't commit to multiple repos at the same time. You can certainly have them all checked out locally and search across them all, and we provide scripts to do that. Example from the Eventide project: https://github.com/eventide-project/contributor-assets
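A hedged sketch of what that kind of checkout/sync automation can look like (the org name, repo list, and directory layout here are hypothetical placeholders, not the actual contributor-assets scripts):

```ruby
# Computes the git command that brings a repo checkout up to date:
# clone it if it is absent, fast-forward pull if it is already present.
def sync_command(root, org, repo)
  path = File.join(root, repo)
  if Dir.exist?(File.join(path, '.git'))
    ['git', '-C', path, 'pull', '--ff-only']
  else
    ['git', 'clone', "git@github.com:#{org}/#{repo}.git", path]
  end
end

# Hypothetical repo list and project root.
REPOS = %w[account-component funds-transfer-component project-scripts]
ROOT  = File.expand_path('~/projects/example-org')

REPOS.each do |repo|
  cmd = sync_command(ROOT, 'example-org', repo)
  puts cmd.join(' ')  # swap `puts` for `system(*cmd)` to actually run it
end
```

With every repo under one project directory like this, cross-repo searches work the same as they would against a monorepo root.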
This is an open source project. Our project is based on many of the same ideas and techniques. They've held up for 7 years or so.
Eh? 400 components without any integration testing?? Who glues the sum of the parts together into a whole? This can’t be a single application. Sounds more like many completely independent applications to me.
Who said we don't do integration testing? We likely don't do it in the way that you would imagine, because we don't need to. We also don't call it integration testing, we just call it testing. We have a test environment that we deploy to for final inspection. It is a common occurrence that we deploy a cohesive set of new components for the first time and they all work together, because each component is tested in isolation, including integration with its afferents. Each afferent (library or other component) exposes diagnostic substitutes and we use dependency injection. You can read more about that here: http://docs.eventide-project.org/user-guide/useful-objects.h...
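To illustrate the diagnostic-substitute idea, here is a generic Ruby sketch of the pattern; the class names and message shapes are made up for this example and are not the actual Eventide API. A dependency ships a substitute that honors the same interface but records what it was told, and the component receives its dependency via injection, so a test can interrogate the substitute afterward:

```ruby
class Writer
  def write(message)
    # The real implementation would append the message to a message store.
  end

  # The substitute honors the same interface, but records invocations
  # so a test can inspect what the component told it.
  class Substitute < Writer
    attr_reader :written

    def initialize
      @written = []
    end

    def write(message)
      @written << message
    end
  end
end

class Handler
  # Dependency injection point: defaults to the real writer,
  # but a test can assign the substitute instead.
  attr_writer :writer

  def writer
    @writer ||= Writer.new
  end

  def call(command)
    writer.write({ type: :deposited, amount: command[:amount] })
  end
end

# In a test, inject the substitute and assert against what was recorded,
# without standing up any real infrastructure.
handler = Handler.new
handler.writer = Writer::Substitute.new
handler.call({ amount: 11 })

puts handler.writer.written.first[:amount]  # => 11
```

Because every afferent offers a substitute like this, each component's tests can exercise its integrations without deploying anything.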
Also, I never said 400 components -- we aren't there, yet. It's ~100 components. It's just 400 repositories. 100 of those are deploy projects (they simply contain scripts, a dockerfile, helm values, etc.) most of which is boilerplate.
Yes, thought so as well. "Instead of a single search and replace we get to a flow state and create an automated script to do it for all our repositories. Due to flow state, productivity gains are immense. Of course we will manually check 400 times if the script did correct work, but we enjoy that each time."
Usually, when you put quotes around something, it's considered proper to actually be quoting someone instead of trying your best to make them sound like a clown.
But, why don't I go ahead and give you a real quote.
Of our 400+ repos, 100 or so of them are deploy projects. The only time they need to change in batch is if one of our boilerplate scripts changes. This is infrequent, but when it does happen, we have a "project-scripts" repository that contains our boilerplate scripts as well as scripts to update them across all of our repositories. This doesn't always require individual verification, but when it does, it's typically a matter of installing dependencies and starting the application.
Many text editors are capable of doing "single search and replace" across multiple repositories. All of our repos are in a single project directory, so we have a root, just like anyone with a monorepo does. Rarely does that "single search and replace" touch more than a small handful of projects. So, except in the extreme case, you are only running a fraction of the tests or verifications when you make one of these "search and replace" changes. Compare this to a monorepo, where, unless you have specialized tooling to slice your test suite while considering your package dependency tree, you end up needing to run all of your tests.
If you are responsible and your project is in control, you run the tests on your laptop first, and then (typically) you push your commit and it runs in CI. Actually, more typically, you push your commit to a branch, open a pull request, have it run in CI, have it reviewed, then merge it, where it runs in CI again. Each time it ran in CI, it ran your entire test suite (usually).
We don't do most of that. Most of what every team takes for granted, we just don't do. So, when someone imagines having to do their typical process for getting a change integrated 100 times, they are right to fear that. But they are creating a false equivalence. We don't do that, because we've eliminated all of that waste from our process. What we do 100 times (again, the once or twice a month that we do it) takes a fraction of the effort that most teams spend to get a single change in.
There's a reason that Toyota ate the west's lunch when it comes to manufacturing. They understand just-in-time (one-piece flow) and small batch sizes. They know how to address problems at their root and systematically eliminate waste.
Most in the software industry know how to do one thing: apply the thing the latest dev celebrity said in their latest tweet or blog post.
Here's the difference between me and that celebrity: I'll tell you straight up not to do what we do. I'll tell you straight up it will take you years with guidance to get anywhere close to what we do. I'm just telling you it's possible and we don't actually need to live in software shanty towns our whole lives.
Someone's got to say it, so I will. Most of your comments come across like you've gone off the rails and started replacing competency, not to mention humility, with narcissism. Being so cocksure and backing it up with statements that you need to be as brilliant as you to "understand" is such a laughably large red flag that I'm not surprised people interpret it as satire. I really hope your colleagues are on board with this and that this isn't your own dogmatic crusade dragging them along.
I don't read cockiness or narcissism in his posts. Having just started reading Clean Architecture by Robert C. Martin two days ago (and hoping to complete it tonight, then planning on re-reading it again), it seems to me that hmeh has learned and PRACTICED what is discussed in that book. I wish I were in their shoes!
Clean Code (what I assume you are referring to with 'CC') is 10 years older than Clean Architecture. I haven't read Clean Code; I just picked up Clean Architecture. But I venture to guess some of his thoughts have changed since then. I also assume that Clean Architecture deals with higher-level issues than Clean Code. I may have to pick up Clean Code to see what he said in it.
Thank you. I saw your other post, but it’s gone now. If you want to find me, I’m in the Eventide project Slack and on GitHub (you can find links to some of my posts here in this thread).
Let's just say that that sphere of the software dev community isn't exactly universally seen as competent work; it's more like the orthodoxy of ~15 years ago that's now increasingly, and in my opinion rightfully, being put into question for its poor complexity and performance trade-offs when applied beyond textbooks and conference talks. And this is being diplomatic compared to others.
I'm certainly not saying there isn't something to learn from them, though; I've read a bunch of them. But after seeing countless debugging hours wasted and projects devolve into a complete mess of patterns, the amount of ping-ponging between files, functions, and patterns that provide little to no value is astounding. The option to write non-clever code is seemingly insulting to the self-professed architect, but in my own experience at least, it is vastly more maintainable and productive. Not because it results in "clean" code - it's usually pretty dirty and nothing to write home about - but because reading even a page-long flat procedural function that's just doing its thing is much more straightforward than ping-ponging through some clever architect's conglomeration of decaying design patterns.
Having the "meat" of the code available with a minimal amount of abstractions/layers also makes it much easier to adapt to performance requirements without having to propagate changes throughout half the application due to it being entangled in pattern abstractions. (On that note, it sounds like hmeh is trying to work around this problem with abstractions by doing a significant amount of artificial partitioning. But that would be solving a mess you've created yourself in the first place.) Finally, it will still be dumb & dirty code a couple years from now, with no need to have all those design patterns internalized.
People promoting these things will of course then just blame it on the dev and call them average, or lacking in experience, because they can't identify and combine the various clever patterns correctly - they "are doing it wrong" - and that's where I believe the above narcissistic tendencies I called out come from.
I see a difference between "Clean Code" and code design patterns on the one hand, and "Clean Architecture" on the other. It seems to me that most think I'm speaking of Uncle Bob's Clean Code book/ideas when I'm thinking about his Clean Architecture book/ideas.
> The option to do non-clever code is seemingly insulting to the self-professed architect, but it is in my own experience at least, vastly more maintainable and productive, not because it results in "clean" code - it's usually pretty dirty and nothing to write home about, but because reading even a page-long flat procedural function that's just doing its thing, is much more straight-forward than ping-ponging between some clever architect's conglomeration of decaying design patterns.
With all due respect, if you think you are talking about anything that I was attempting to describe, you are not. Everything that we do is based on simplicity and building the most simple things that we can. We may have different definitions of simplicity however. For mine, check out Rich Hickey's "Simple Made Easy" talk if you haven't seen it.
We simplify things by de-complecting, or avoiding entanglement. Having separate repositories is just one way to reinforce the simplification that we put in place with design. We call that structural design (because it's the design of the actual structure of the application).
> on that note, it sounds like hmeh is trying to work around this problem with abstractions by doing a significant amount artificial partitioning.
I can guarantee you that none of the partitioning is "artificial". I think at this point that you have projected so much of your experience into what I am saying that it's impossible for you to see anything I say in any other light. This, by the way, is what I mean by tacit knowledge. I say X, and you either hear Q, or you hear XIENRSTIENRSTIERNST; either way, you cannot possibly understand what I am saying. This isn't a negative reflection on you, or a positive reflection on me. It's simply the way that tacit knowledge works, and it's why most "internet arguments" are wastes of time.
> People promoting these things will of course then just blame it on the dev and call them average, or lacking in experience, because they can't identify and combine the various clever patterns correctly, they "are doing it wrong", and that's where I believe the above narcissistic tendencies I called out comes from.
And yet, it is literally lack of experience (and ability) that prevents people from doing things they cannot yet do. That's how expertise works. Pointing that out is not narcissism. A gardener does not have the experience (or training) necessary to do brain surgery. This isn't a mark against the gardener. The difference in our industry is that it is filled with expert beginners. People who think that because they make 6-figure salaries after their limited education and experience they can now understand everything there is to understand about software design. This just isn't the case. If a person sees that as a slight to a fellow human, that says more about them than it does about the person pointing to that reality.
> Someone's got to say it so I will. Most of your comments come across like you've gone off the rails and started replacing competency, not to mention humility, with narcissism. Being so cocksure and backing it up with statements that you need to be as brilliant as you to "understand" is such a laughably large red flag that I'm not surprised people interpret it as satire.
Where did I say that, exactly? I’ve said nothing about intellect. I’ve mentioned tacit knowledge multiple times, because I have enough experience to recognize how ludicrous a lot of this sounds. It sounded that way to me at first, until I experienced it and saw the pieces fit together. That’s how tacit knowledge works, especially when it is counter to average knowledge. It’s the same reason American industry smashed Japanese cars in demonstrations instead of listening to them when they said large batches were bad (I’m simplifying here somewhat).
And which part, specifically sounds incompetent? I’m happy to discuss that.
By the way, it's decidedly difficult to challenge a common belief, dare I say, orthodoxy, without sounding insane, confident, or even cocky. Could I be better at it? Of course, and I work at it, but I also am not here to make friends. I'm here to provide a perspective that I believe is sorely lacking from the development community and is drowned out by the orthodoxy, its acolytes, and its beneficiaries.
> I really hope your colleagues are onboard with this and that this isn't your own dogmatic crusade dragging them along.
Given their tenure on the team and their other options, it’s a pretty safe bet. It’s not my crusade, btw; it’s our team’s goal to manage a relatively complex project with a relatively small team.
Also, someone has to say it, so I will. You have now dropped this conversation, which was about the work, into ad hominem and straw man attacks. That’s one of the worst parts about this industry: people’s inability to debate and discuss without attacking people or glorifying celebrities.
> into ad hominem and straw man attacks. That’s one of the worst part about this industry: people’s inability to debate and discuss
Let's see here:
> You have experienced what happens when web developers cosplay as software architects.
> That is, you experienced an average team doing average work misguided by average bad advice. It says absolutely nothing about software design. It only speaks to the fat part of the bell curve doing what it always does.
> because, unlike them, I have significant material experience in both camps.
> It’s hard to imagine worse advice.
> But please, by all means, continue to spread disinformation and keep us in the dark ages.
> Just read the author’s bio. This is a person that appears to have zero software design experience writing an article telling you to ignore software design and just respect your team configuration. I call this Conway’s Confusion.
That sets a good friendly tone huh? If you can't take push back, don't be an arse to begin with.
> And which part, specifically sounds incompetent? I’m happy to discuss that.
Your arrogance precludes a fruitful discussion. But I believe that this needs to be called out, if nothing else to nudge other people to also do it when they see it, or to, albeit much less likely, nudge you towards eating some humble pie.
> You have experienced what happens when web developers cosplay as software architects.
I agree that this is overly snippy to the point of being counter-productive. It represents a particular emotional frustration with the state of our industry. We knew what ended up being microservices (web API servers making web API calls to other web API servers) was a failure mode. We knew you couldn't just introduce network hops and call it "architecture". So yes, I'm annoyed about that, enough that I'm willing to call out nameless "web developers", but I understand how that can create discomfort.
> That is, you experienced an average team doing average work misguided by average bad advice. It says absolutely nothing about software design. It only speaks to the fat part of the bell curve doing what it always does.
Indeed, and the person I was responding to effectively acknowledged this. The sooner we can recognize that the skill curve/technology adoption curve are real things, and crossing the chasm takes hard work, the sooner we can stop leaving people behind said chasm. It's not easy. Nothing I said here is untrue. If it is, please point it out.
> because, unlike them, I have significant material experience in both camps.
Again, I'm pointing to a material difference. Most people who are attempting to refute what I am saying have never done what I am saying. They are doing it from a position of fear, uncertainty, and doubt, or worse. I've been where they are. I've fought the fight they are fighting and, thankfully, I lost, and was introduced to new ways of doing things and seeing things. If a person has done what I'm discussing, I would expect them to say that and tell me why it failed. Instead I get people telling me it can't work and I'm incompetent, etc.
> It’s hard to imagine worse advice.
> Just read the author’s bio. This is a person that appears to have zero software design experience writing an article telling you to ignore software design and just respect your team configuration. I call this Conway’s Confusion.
Putting these together because they are about the OP article and the OP. "hard to imagine worse advice" is hyperbole, but it is bad advice. You can't force fit concepts to teams. I mean, you can try, but you'll always end up with unnaturalness.
The actual OP's bio talks about their career. None of it mentions software design. If a person with zero surgery experience started writing about how to properly set up operating rooms, you can darn well bet they would be called a charlatan and called on the carpet. If they aren't, people may die. Software usually isn't that serious, but it's not hard to imagine that billions upon billions of dollars have been flushed down the toilet for the sake of poor software design that no one speaks out against.
> But please, by all means, continue to spread disinformation and keep us in the dark ages.
Yes, overly cynical and unnecessary. I made my point prior to this and I didn't need to add this.
> That sets a good friendly tone huh? If you can't take push back, don't be an arse to begin with.
Some was unnecessary yes, thanks for calling it out. The rest represents what I think is healthy push-back against an orthodoxy that causes more harm than good. We can achieve more as an industry and make our way out of the realm of hobbyists and into something more serious. Many of us call ourselves engineers, but nothing we do resembles anything that people who are actually classically trained engineers do.
> > And which part, specifically sounds incompetent? I’m happy to discuss that.
> Your arrogance precludes a fruitful discussion. But I believe that this needs to be called out, if nothing else to nudge other people to also do it when they see it, or to, albeit much less likely, nudge you towards eating some humble pie.
Just to be clear: you said that what I was saying sounded incompetent. I asked you about that, and you're telling me I appear to be too arrogant for you to tell me why I sound incompetent? Please, tell me what I said that sounds incompetent. Let's move past the ad hominem portion of the discussion now and talk about the actual substance. I'll do my best to refrain from hyperbole and unnecessary snark.
Answered somewhat what I mean by at least the "code smells" and idiosyncrasies (competency was probably too harsh given the amount of info) in a reply to another user.
> Your arrogance precludes a fruitful discussion.
This was specific to the tendencies in other replies to defer to your (or your team's) brilliance/experience as the answer to why your setup is not suspicious, but excellent. Not sure what can come out of a discussion when that's the answer to everything. Well, nice for you, but a setup that doesn't scale with employee count (and thus competency/experience/interest differences) isn't exactly something that you can go around bragging about.
> This was specific to the tendencies in other replies to defer to your (or your team's) brilliance/experience as the answer to why your setup is not suspicious, but excellent.
I touched on this in my reply to the other user, but at this point, I believe you may just not know what tacit knowledge is. You continue to think that I am pointing to our own brilliance. I'm not. I'm calling a spade a spade. I recognize what it takes to gain particular types of knowledge (tacit, or subtle knowledge), and I recognize that it's this reality that prevents most conversations about techniques from being fruitful.
Each participant will put their own experience behind the meaning of their words (and worse, their conversation partner's words), and it will prevent them from recognizing what one another are saying. The only way to have a fruitful discussion is for both sides to be capable of recognizing when that is happening. In my experience, most people aren't -- they'd rather die on their hill than recognize that the person they are talking to is simply on the same hill but sees it differently, or is on a different hill that really is better but cannot yet be seen as such, because it is over the horizon of their knowledge.
I don't actually know if you are interested in understanding this more or if you joined the conversation just to try to put me in my place, but if you are, here are some articles that may help:
Once you can recognize that there is subtle knowledge you might actually see my pointing to it as attempting to keep the conversation from devolving into exactly the type of thing that it tends to devolve into. Or not.
> I believe you may just not know what tacit knowledge is
Not sure where you got that from, haven't said anything about it nor did I include that in my quotes.
I'm well aware of it. But if I have one comment on it, it would be that I see it more as something an experienced person (or someone with a natural knack for it) makes use of under the hood; the resulting quality of the output, however, can usually be recognized by everyone - it's not something reserved for the "blessed ones". Take the redis source code for example (it's been quite a few years since I last read it, though). The author clearly has this skill, but one doesn't have to possess it to recognize the code quality (and btw, iirc, without any mention of a pile of design patterns/methodologies - just "doing it", but to each their own).
So therefore I'm a bit suspicious of anyone that claims that their idiosyncratic setup is actually simple if you just have experience enough to be able to judge it. I'm not saying that everything can be obvious at first glance but the vagueness triggers my radar after being worn down by experiencing way too much over-engineering motivated by self-serving vagueness and/or word salads ("baffle them with bullshit").
That said, if a code base keeps requiring you to make the correct design decision using subtle knowledge, that's a fragile situation and something seems off. You've probably already made the code base too complicated - my prejudices (somewhat confirmed by the language used in one of the articles you linked [1]) would be through a pile of design-pattern-inspired abstractions that now everyone needs to be able to juggle at all times.
[1] "For example, the Tell, Don’t Ask principle can’t be expressed directly in terms of Push, Don’t Pull, which has a more common name: encapsulation. And each one of these qualities reflects cohesion and coupling. And furthermore, cohesion and coupling are inter-related and affect each other. Afference and efference are kinds of coupling. Afference is inbound coupling, and efference is outbound coupling"
> Not sure where you got that from, haven't said anything about it nor did I include that in my quotes.
I got it from here:
> This was specific to the tendencies in other replies to defer to your (or your team's) brilliance/experience ...
and
> Being so cocksure and backing it up with statements that you need to be as brilliant as you to "understand" is such a laughably large red flag...
Given that at no point did I point to our "brilliance", I assumed you were referring to my pointing to tacit knowledge:
> It's honestly a hard question to answer because the real answer requires tacit knowledge...
> ... because much of what we do requires tacit knowledge to see the benefit of ...
So if I'm mistaken, I apologize, but please do point out where I claimed brilliance. I am certainly claiming experience, and expertise, but those are earned, as they are in any profession. It's also true that no one in this conversation (including you) other than me has actually seen our code, and would therefore be ill-equipped to judge it. That is, they do not have the experience of our code base. Rushing to judge it based on pre-conceived notions only reflects a lack of diligence, integrity, and/or awareness.
> the resulting quality of the output however can usually be recognized by everyone...
Have you seen our code? Or have you seen a few mentions of some of the things that we do and you are using that to fabricate an idea of what our code is?
> So therefore I'm a bit suspicious of anyone that claims that their idiosyncratic setup is actually simple
You are of course free to be suspicious. A significant part of the experience necessary to judge it would be to actually see it. We are, at the end of the day, dancing about architecture. You cannot see our code, you cannot see our actual set up, which typically would mean that one would have essentially nothing to say about it. If you'd like to refute particular practices I mention, that's fine. But please, keep your presumptions to yourself, or at least ask them in the form of a question (e.g., "artificial partitioning").
> ...would be through a pile of design patterns inspired abstractions that now everyone needs to be able to juggle at all times.
We have relatively junior developers on our team. There are a small handful of common patterns that are used repeatedly throughout our code base. We strive to eliminate special (unnecessary) variation so there are always exemplars and/or documented norms. We don't have 10 varieties of "service objects". We don't have callback hell. They do not struggle with these things. They struggle with other things, as they are relatively junior, but we support them. Again, frankly, you have absolutely no idea what you are talking about and you are continuing to attempt to make up for your ignorance (of our code base and what I am saying) with hubris and presumption.
I just saw this edit from you:
> That said, if a code base keeps requiring you to make the correct design decision using subtle knowledge, that's a fragile situation and something seems off.
You and I have two different ideas of expertise and software design and they are irreconcilable. Maybe in 15 years you can look back on this conversation and recognize that there was always land beyond the horizon. Or maybe not. You are the doctor that is refusing to wash his hands before surgery because you still believe in bad humors. There are things you do not know, and you are stubbornly refusing to recognize them. You have probably worked on some pretty horrific code bases, and you can probably back up every one of the things you are projecting onto me with personal experience, but you have drawn the wrong ultimate conclusion. You have drawn the conclusion that software design does not exist and does not have consequences and that anyone who claims to do it is a charlatan. That may even be true much of the time (goodness knows that's what I'm saying about many people who claim to be doing software design). Judge me as harshly as you wish for making this assessment. Thankfully, we do not work together and we will be unlikely to cross paths again in the future.
> Given that at no point did I point to our "brilliance", I assumed you refererring to my pointing to tacit knowledge
I deduced it from your general tone. But you're probably right that the tacit was part of it.
> Have you seen our code?
No, but it was a general comment, hinted by providing the redis source code as an example.
> and presumption.
Yes, it was prejudicial, as stated. I hope I'm wrong.
> You and I have two different ideas of expertise and software design and they are irreconcilable.
Agree.
> Maybe in 15 years you can look back on this conversation and recognize that there was always land beyond the horizon. Or maybe not. You are the doctor that is refusing to wash his hands before surgery because you still believe in bad humors. There are things you do not know, and you are stubbornly refusing to recognize them.
And here's why I said your arrogance would make it unfruitful. Because in the end you're of course objectively right, and I'm objectively wrong and will see my errors in due time. Sigh.
> Thankfully, we do not work together and we will be unlikely to cross paths again in the future.
Well, there is Poe's law. You can call my views and our team's culture extreme. We believe it is extreme from a statistical perspective. There's a reason that our team has the tenure that it has. What we are doing isn't typical out there.
But, I would challenge you to challenge anything that you saw as satire and try to wrap your head around how a sane, rational person could, with a straight face, say "that's not a problem" and be absolutely right. Feel free to call something out specifically to me, and I will back it up. I'll warn you that you may still not agree with me, because much of what we do requires tacit knowledge to see the benefit of. And once you do, it's self-evident. A person must be studied in Lean and design principles and have seen the trajectory of several software projects over the course of many years.
See if you can understand what are the actual problems in software. What causes projects to fail and teams to have to hire 100s of developers to maintain a semblance of the productivity they had for their first year or two of development.
Try to record your setbacks in a daily work log (we all do this) and address root causes. Do this for 10-20 years and see where you end up.
Just to clarify, what are components referring to? Are they independent web services?
And how many developers did you work with in total? Are the 12 just your team or the entire engineering org?
Edit: found from another comment
> We don’t call them microservices, because we aren’t web developers pretending to do architecture and recreating the mistakes of DCOM, CORBA, and Web Services. Most of them are autonomous event sourced components. The others are stand alone web applications that are stitched together with Nginx routing and SSI.
Components are either autonomous event sourced back end components (using Eventide) or independent web applications that are combined with Nginx routes and SSI. The UI is (mostly) server-rendered Rails. The users have no idea they are hitting 20 or so different web applications, but the developers sure feel the productivity boost of every application being a small application that is (relatively) independent from one another.
A dozen or so is the entire software development organization.
Edit: Indeed! If you have any other questions, ask away. Most people think what we do isn't actually possible, but it is.
Standing up a service across the globe is non-trivial even with world class tools. You still run into issues like services not being properly co-located with other services they need to call, so you end up paying for intercontinental hops. And cold connections because the new service doesn't have much traffic yet. And maybe DB leaders are only in North America so Asia has terrible perf.
> I'd love to be able to separate myself and my team from others by an agreed-upon interface.
Don't doubt it, but that is what the grandparent was getting at - you don't want to be held accountable for the success of the overall product; you want a defined contract that you maintain, and the value-add of that is someone else's problem.
The dysfunction of a business is made up of individuals doing what makes the most sense for them locally. It isn't unreasonable to want microservices, it just happens that the incentives that make developers like microservices also tend towards systems that lack accountability for achieving system-level outcomes.
Isn't this precisely what leadership is for? If you've implemented an architecture that mirrors your org chart, then the interfaces aren't just in software, they exist between managers as well. System-level success depends on the managers and the directors above them to steer the boat in a way that places accountability on the teams and components of the architecture necessary to ship a feature, or whatever.
If your devs are just implementing whatever system design is coming from your tech leads, and having no opportunity to have an input or their own, you'll find you have very unhappy devs who don't hang around very long. That churn will make it impossible to build a decent system.
If your devs are just implementing whatever they want, with no strong design from your tech leads, and leaving no room for input from other teams, you'll find you have very unhappy customers who don't hang around very long... etc.
> I'd love to be able to separate myself and my team from others by an agreed-upon interface.
My experience suggests that speccing out such an interface well is a tremendous amount of work, and if you get it wrong (you most likely will) or if it needs to evolve it will generate lots of work since now it's set in stone and very hard to change so you're likely going to pay "interest" by working around its shortcomings for the foreseeable future.
In most cases there's going to be substantial overlap between the functionality required for various business functions, and even in differentiated systems the ideal technical domains may not map cleanly to business domains. It's very easy for this to result in multiple teams having half-assed, semi-compatible implementations of the same functionality.
Some places try to address this by creating dedicated "core" or "platform" teams, but there are numerous pitfalls this way as well. Platform teams and product teams necessarily have to justify their budgets in different ways. They have different stakeholders and internal sponsors. This can result in political struggles over headcount. "Brain drain" scenarios where product teams are perennially starved as their strongest ICs are moved to platform. "Slush fund" scenarios where platform teams are regularly "temporarily" reassigned to support feature development. I've seen them both.
The longer you can have your entire engineering team share responsibility for the entire product the better. I am convinced of this. Unless you can truly, completely decouple systems -- making them separate products with dedicated teams -- you run a high risk of having a more complex codebase that people still need to understand in its entirety to work effectively. The worst of both worlds.
> Unless you can truly, completely decouple systems -- making them separate products with dedicated teams -- you run a high risk of having a more complex codebase that people still need to understand in its entirety to work effectively. The worst of both worlds.
Exactly what I meant. And to make it happen you need to understand the underlying business domain. A separate product doesn't have to be huge.
For a lot of tech startups the underlying business domain can be nebulous and change significantly over time, and then you end up with substantial organizational debt. Well-established firms can get away with (and benefit from) more differentiation and formal structure because they tend to be able to make longer-term plans.
It turns out that at a certain point, features start requiring integration in the system. Let's take a hypothetical SaaS that provides workflow management for a domain. Let's say there are about 125 engineers. Workflow can't be 60 engineers, so it's really 12 teams, given teams of 5. One takes the notifications engine and builds emails and slack notifications. They have an API for other teams, but this feature requires a new feature from the notifications system. That's a relatively light touchpoint, but two teams are involved in a dependent relationship.
Now let's say there's a partner integration team. This team provides an external API to ease integration with tools such as Salesforce. This team has a full backlog, and it's a complicated subsystem that doesn't do well as common code. (External APIs also require governance.) So, this feature requires work from the external API. To call this feature finished, this team needs to be able to import and export to other systems, but this is not the primary feature of the application. So, you have a second team involved.
Third, you have an internal support team that handles tech support and setup for clients. This team now has work to do to onboard clients.
And there's the team that does the work in the workflow.
As such, this org could need four teams to get a feature done. Now, you could say that such a team could use common code ownership to get things done, but then you have 125 engineers that have to understand a series of systems and not step on each other, systems that they only work in occasionally and change by the time they need to get in again.
I've worked in a common code startup and it works... up to a point. It is definitely slower to have to onboard continuously in other code areas in which you don't have domain experience, even if the documentation is spectacular (and it better be!)
You just hand-waved their entire business as generic "workflow", picked up a few common technical issues (notifications, integration, support), and built teams around these issues. That indeed is going to hurt :-)
In team topologies, you'd consider them "complicated subsystem" teams. (See https://teamtopologies.com/key-concepts ) They may even be considered stream-aligned teams, but a feature crosses multiple streams to be valuable.
I used workflow as a common type of SaaS business. My past and current companies could be considered such companies and put significant resources into such streams. Trying to enable the workflows of thousands of companies is a remarkably difficult problem.
But to take an example, look at monday.com, a workflow SaaS business that has a 9 billion dollar market cap. I would be surprised if they didn't have at least one team dedicated to tools for their support team.
The sad thing about these "monolith vs microservice" debates is that to this day we have programming languages which favor shared mutable state, so a program written like this is an absolute hell (or a very leaky abstraction) to distribute. And it doesn't have to be like this.
Think about it. When your variable is a simple value, like a number or a string or a struct, we treat it as pass-by-copy (even if copy-on-write optimized), typically stack allocated. Remote IO is also pass-by-copy. But in-between those two levels, we have this intermediate pointer/handle hell of mutable shared state that the C family of languages promote, both procedural and OOP variety.
The original OOP definition is closer to the Actor model which has by-copy messages, but the actual OOP languages we use, like C++, Java, C# all derive philosophically from the way C handles entities on the heap, as this big shared local space you have immediate access to, and can pass around pointers to.
And that's where all our problems come from. This concept doesn't scale. Neither in terms of larger codebases, nor in terms of distributing an application. It also doesn't scale cognitively, which the article mentions, but doesn't quite address in this context.
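The distinction the commenter draws can be made concrete with a small sketch (hypothetical names; Python here stands in for any language with reference semantics):

```python
import copy

# Shared mutable state: two "modules" hold the same dict, so a change in one
# silently leaks into the other -- the pointer/handle problem in miniature.
config = {"retries": 3}
module_a = config          # both names alias one heap object
module_b = config
module_a["retries"] = 99
assert module_b["retries"] == 99   # surprise coupling across the boundary

# By-copy message passing: each side works on its own value, as an actor or
# remote service would (serializing over a wire forces exactly this copy).
message = {"retries": 3}
received = copy.deepcopy(message)
received["retries"] = 99
assert message["retries"] == 3     # the sender's state is untouched
```

The second style is what makes a module trivially distributable: nothing about its behavior changes when the copy happens over a network instead of in memory.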
Something I’ve wanted for a while now is a language / framework that behaves like networked micro services but without the network overheads.
E.g.: the default hosting model might be to have all of the services in a single process with pass-by-copy messages. One could even have multiple instances of a service pinned to CPU cores, with hash-based load balancing so that L2 and L3 caches could be efficiently utilised.
The “next tier” could be a multi-process host with shared memory. E.g.: there could be permanent “queue” and “cache” services coupled to ephemeral Web and API services. That way, each “app” could be independently deployed and restarts wouldn’t blow away terabytes of built up cache / state. One could even have different programming languages!
Last but not least, scale out clusters ought to use RDMA instead of horrifically inefficient JSON-over-HTTPS.
Ideally, the exact same code ought to scale to all three hosting paradigms without a rewrite (but perhaps a recompile).
Some platforms almost-but-not-quite work this way, such as EJB hosts — they can short circuit networking for local calls. However they’re not truly polyglot as they don’t support non-JVM languages. Similarly Service Fabric has some local-host optimisations but they’re special cases. Kubernetes is polyglot but doesn’t use shared memory and has no single-process mode.
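None of the platforms named above is assumed here, but a toy sketch of the "single process, pass-by-copy messages, hash-routed service instances" idea from the parent comment might look like this (threads stand in for pinned instances; all names are illustrative):

```python
import copy
import hashlib
import queue
import threading

# A minimal in-process "service mesh": each service instance is a thread with
# its own inbox, and a hash of the message key picks which instance handles it
# (the hash-based load balancing idea, minus the actual CPU pinning).
N_INSTANCES = 4
inboxes = [queue.Queue() for _ in range(N_INSTANCES)]
results = queue.Queue()

def service_instance(inbox):
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut the instance down
            return
        msg = copy.deepcopy(msg)  # work on a private copy, like a remote call
        msg["handled"] = True
        results.put(msg)

threads = [threading.Thread(target=service_instance, args=(ib,)) for ib in inboxes]
for t in threads:
    t.start()

def route(msg):
    # A stable hash keeps a given key on one instance (cache-friendly).
    h = int(hashlib.md5(msg["key"].encode()).hexdigest(), 16)
    inboxes[h % N_INSTANCES].put(msg)

for key in ("alice", "bob", "carol"):
    route({"key": key})

handled = [results.get() for _ in range(3)]
for ib in inboxes:
    ib.put(None)
for t in threads:
    t.join()

assert all(m["handled"] for m in handled)
```

Swapping the in-memory queues for network sockets (or shared memory) would change the transport without changing the service code, which is the portability the parent is asking for.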
Yes, same feeling I had, but with a dual mode: it can be compiled as a 'standalone' service over HTTP/REST (or whatever) and _also_ compiled as a classic module with a strictly defined interface.
One cool thing about standalone services which needs to be factored in is that they can be spun up and debugged very easily. But for deployment, we pay for all the network latency/marshaling overhead, and coordination complexity.
So, best of both worlds? As for polyglot, there does have to be a shared platform (C ABI, JVM, etc). (Go doesn't play so nicely with other languages due to goroutine stack allocation.)
SCA did this back in the J2EE/SOAP days. An SCA interface was just the interface but the boundary itself could be implemented either as an in-process plain Java call, a cross-EJB call, or a SOAP call, so that in theory one could be swapped out for the other. In practice IME, it never was but maybe I just never came across the right use cases.
From your description I thought you might just want a bunch of singletons calling each others methods ("passing messages"), and to get the "by-copy" you could serialize and deserialize everything, or write proper copy constructors. Do I understand correctly?
The best way to do this is message passing. My current way of doing it is using Aeron[0] + SBE[1] to pass messages very efficiently between "services" - you can then configure it to either be using local shared memory (/dev/shm) or to replicate the log buffer over the network to another machine.
I'm working on a language that's like it. But my esoteric stuff aside, the closest production system we have like that is Erlang (and Elixir, if it's your thing).
It is because that's literally an architectural choice which prevents you from easily moving a module out of your "monolith" to another machine on the network, and causes the bugs.
The language & memory architecture... are an architecture matter.
Yes, but not necessarily: with Rust you can put stuff on the heap yet it's still ownership-checked just like the stack is scoped in other languages. Conceptually, I mean.
Let's step back a second. If the rationale behind adopting microservices for everything is PURE orgitecture rather than software architecture, then that's not really a rationale at all.
Instead of having one instance, you now have dozens of little service fiefdoms plus all the added network I/O overhead associated with that.
The principled approach is really to simply not do that, for the basic latency costs. I mean, did World of Warcraft, which is way more impressive than 99% of all the little Django apps out there in corpo world, run on a gajillion microservices? Fuck no
If you follow the author's advice you would be tricked into believing monoliths like Photoshop, Excel or any AAA game could only have been written by a single team. That couldn't be further from the truth. There are like 100 other ways to create interface boundaries than carving out microservices.
Microservices is a deployment strategy, not a tool you use to reinforce Conway's law - a law that you actually want to defuse more often than follow, so as not to get your product stuck in your old org chart.
Maybe WoW didn't run on Microservices, but you can bet the project was divided into a bunch of different fiefdoms. Just they were separated at the interface level rather than the network level. Same concept though.
Ultimately if you've got different teams working simultaneously, they should be working on non-conflicting features. Microservice is just one architecture that develops within this constraint. You'll never get anywhere if everyone is working on the same interfaces with no separation of concerns. So where should the separation emerge, if not along the same lines as the existing IRL separation? Conway's law is not a coincidence; it's an inevitability...
Yeah but "non-conflicting features" can be at the level of an individual task. It doesn't have to be an entire component. The best projects I've worked on were ones where anyone was allowed (and encouraged!) to work on any part. It helps people learn, spread knowledge, prevents boredom, reduces the number of "critical" people.
The worst projects I've worked on (including my current one) had components where people "owned" them. They fight against outside input and treat "their" part as a little kingdom that they rule. It's worse in almost every way.
Obviously there are limits - at some point your project is big enough that you have to have some specialist individuals and teams, but it's wise to avoid that as much as possible.
This is what you get when all you talk about is the positives, especially to new programmers or students. It's not about the why, but only whether you do or do not.
I'll die on this hill, but DDD can be put right up there with microservices as a solution to a certain type of problem being sold as THE solution.
I'm interested to hear more about your views on DDD - especially if you have examples where DDD has been actively harmful.
Usually my advice for anyone thinking about building a new piece of software for a particular business goal is to a) run an event storming workshop with a group of domain experts to help get a really good idea of events, actors, commands, information flow and clusters of behaviour, and then b) run a second pass where you think the domain through in terms of transactional boundaries and DDD aggregates, and then c) do a third pass where you think specifically about security constraints and how they can be met.
It's an expensive exercise in terms of time taken up front, but having completed it, hopefully the team have gained enough of an insightful understanding of the domain that they won't make silly hard-to-reverse mistakes like needing transactions that span service boundaries, or building demonstrably distinct domains which share similar concepts into uber objects spanning those domains, or having one service depend on information from multiple other services in order to apply required security constraints.
Anyway, TLDR is I have found DDD and DDD-adjacent methods extremely helpful for thinking through designs and making app architecture decisions.
Looking only from a technical perspective, microservices are worse than a monolith in almost every way you can measure. The biggest benefit of microservice architecture is that it handles multiple teams better than a modular monolith. This is a good enough rationale if you want team independence and clear separation of responsibilities, and you can accept all the technical problems microservices bring. If you cannot accept such problems (as in the case of many desktop apps, and at least in some parts of the WoW backend) you just need to select something that will work with your constraints - usually a modular monolith architecture, sometimes something more exotic.
I used to complain about overly complex software, until I realized the problems themselves were very complex. There was/is no way around complexity, and pushing for early simplicity causes more problems than it solves. People need to accept that encoding 1,000 if-else statements (software engineering) will be complex no matter how you spin it. Just design the software upfront for what you will need, like a professional. Technical debt more commonly comes from under-abstraction than from too much complexity/abstraction.
> Just design the software upfront for what you will need
That assumes that you know upfront what you will need and what software design will best meet the need. Sometimes that is the case, but many times it isn't. Different strategies are needed to deal with the latter case.
That assumes that you know the unknowns and how to design for the flexibility you will need. That won't always be the case. Sometimes you just have to accept the fact that you don't know enough up front to make a good design, so your initial efforts need to be focused at gaining the information you need, for example by prototyping to get user feedback knowing that you will probably throw some prototypes away, not making design decisions up front that you don't have the information to make.
We agree. I just phrased it as design and plan for project flexibility, not design a bunch of knobs strictly for every case you happen to know about at some snapshot in time.
I say design and not just plan because you'll want to mock out or prototype something to show and that does take design work to keep your iteration cadence high.
In my mind flexible means easy to iterate on but maybe that is not clear.
You have the terms essential vs accidental complexity for that.
Accidental complexity is something you should work hard to avoid. Trying to avoid essential complexity, however, is a fool's errand, and if you try, it will result in a house of cards that only works under perfect conditions.
> Just design the software upfront for what you will need, like a professional.
This is impossible for anything more complex than the simplest of toy examples. Never have I seen a production system where all behaviors and use cases are known, well-defined and correctly documented ahead of (or even after the) implementation in enough detail. That’s just not how people operate.
It’s the equivalent of saying everybody should just come together and talk it out to achieve world peace.
I work in legacy code. Over-abstraction gives more headaches than having to check things manually when you need to change the software and the person who wrote it hasn't been at the company in the last five years, because the software is full of constraints that you don't know about, and when you need to change something basic the whole thing collapses (because of the interdependence of the components).
There's a balance to strike, right? Some complexity is just for complexity's sake, or out of inexperience or incompetence. But yeah, a lot of it is that the real world is complicated. That convoluted code is solving for a bug an engineer was woken up at night for.
People just get burned by bad programmers. So they make up all these rules to safeguard against them. It's not abstraction that's the problem, it's low-quality engineers.
I think technical debt comes from not cleaning up/refactoring once you’re done. Also, doing this work requires to have a good understanding of the codebase as well as its style/cohesion.
I’m wondering if ML will get good enough to produce PR reviews that comment on style and refactoring opportunities to match the codebase’s spirit.
It’s hard to imagine worse advice. Software design principles lead you to good architecture. Focus on autonomy, proper partitioning, and sound design and you get what you get. If you target monoliths out of some misguided attempt to reduce cognitive load, you will only create unnecessary entanglement. If you try to target “microservices” with N services per team or other arbitrary target, you will end up missing boundaries you should realize or introducing ones you shouldn’t.
Just read the author’s bio. This is a person that appears to have zero software design experience writing an article telling you to ignore software design and just respect your team configuration. I call this Conway’s Confusion.
Source: I lead a team of a dozen or so and we have over 100 deployed components. We don’t call them microservices, because we aren’t web developers pretending to do architecture and recreating the mistakes of DCOM, CORBA, and Web Services. Most of them are autonomous event sourced components. The others are stand alone web applications that are stitched together with Nginx routing and SSI. The user has no idea they are interacting with 100 apps, but the devs sure do.
This is wrong. That's not how Conway's Law works. It’s also not how software design works. But please, by all means, continue to spread disinformation and keep us in the dark ages.
If you don't disagree, then I assume you have a strong understanding of structural design. I assume that you recognize that Conway's law is more of a curse, and a warning than it is something to embrace. I assume that you recognize that the only possible way to "embrace conway's law" and simultaneously recognize structural design would be to constantly be firing or otherwise disbanding entire teams as components get completed (components that likely won't need to be touched frequently because they were designed for a single reason to change). I assume all of that makes perfect sense to you. Yes?
No, not at all. I see why you think of it as a curse, and I don't enjoy it as a result, but it's an observation of a pervasive pattern, not a design decision. It emerges from the way humans communicate and organize themselves.
You can either fight it aggressively and try to minimize it, or you can "embrace" it and try to adapt. I could force it into a disability analogy:
if you become suddenly deaf, you can aggressively try to reverse it by all medical means possible, and you might succeed. Or you might not, and then it would be better for you to accept the fact that you are deaf and design your life around this disability as a means to adapt yourself to it.
That's how I see microservices, too: accepting that large enough organizations cannot communicate as effectively and efficiently as small ones, and therefore designing systems to minimize the impact of these communication failures.
The difference, though, is that microservices were more about the focus on the micro. That is, small things. Not per-team things. It was about trying to break things down into the smallest possible piece (while simultaneously ignoring software design IMO). A single team could end up with 5 microservices. A single microservice may not need to be touched after it's deployed. It's true that this can facilitate multiple teams, and you saw this when people emphasized the technology-soup approach to microservices: finally, you can combine Go, Rust, Java, Ruby, TypeScript, and even COBOL!
By the way, the entire premise that you should organize your software based on the way teams are organized is the flawed thing. Natural designs emerge when designing software. If you must have multiple teams, it may make sense to distribute some of those partitions that emerge across those teams. Or it may not. What absolutely does not make sense is creating partitions to account for your multiple teams. That's design-by-making-stuff-up and it's as doomed to failure as the REST-based-microservices fad.
The single-most important lesson I've learned building products as a founding engineer in a successfully exited startup: weighing tradeoffs and deciding which software architecture to use is the wrong place to dedicate mental energy.
It's always the product. It comes first. Then the business. If you're lucky you may become a pawn in a larger battle among giants and you'll get acquired before you attempt to make any profit.
If you end up in a place where your chosen architecture is no longer capable of supporting your scale - that's a happy place that very few teams get to experience. It means you've survived.
Given that, whatever allows you to quickly get things in the hands of real customers (which depends quite heavily on what the actual product is) is the best architecture.
We've hired some experienced engineers from giga-corp FAANG etc into tiny startups. The transition is hard, because there the opposite is true. You have a business already, and you have a well-defined goal you need to achieve with multi-year roadmaps etc. There, yeah you should probably decide on architecture first.
Sad to see this isn't the top comment. Unless or until you have a built-in source of guaranteed demand - the only thing that matters is product agility in order to find customer demand. Without demand, you have no revenue, investors lose interest, the money runs out, and people stop paying you to work on that system. Then everything gets thrown out. And "demand" is not "we have paying customers" - it's having sufficient revenue, for whatever that means for that particular org.
I think we’ve been spoiled by move fast break things type of startups, and indeed if you’re in a competitive field you might have to fight for your survival.
But truly successful projects are the ones that are worked on and improved for decades. Long-term work means that you will have to be conscious about being clean, and care about constant refactoring and simplification, and design decision that will prevent paralysis down the line.
There’s a reason twitter did not change for like 10 years and apps like Instagram are able to adopt new trends super quickly. (Unrelated: since the Instagram team is behind Threads, I predict that Threads is going to evolve really really fast and be harsh competition to twitter over time)
I think a revisit of Conway's paper[1] might be appropriate. Between that, and the recent talk by Kevlin Henney about architecture[2], you'll be in far better shape to make such decisions.
“[D]esign the software to fit the maximum team cognitive load.”
I see. So, RAID drives, failover servers for redundancy, and generators (or at least UPS batteries) for power — but push your team to their maximum load.
Might be a bit of a wording thing, because I think I got it very differently than you.
To me, that sounds like "you can't design for a higher cognitive load than what your team can absorb" - and not "try to optimize the cognitive load for the maximum your team can absorb".
I totally agree... the real question is how to reduce cognitive load. Try to get a system that can fit in your head, although I have totally failed to achieve this myself. But it is a goal!
Create self-contained frameworks and libraries by extracting code that starts becoming heavy and could be useful by its own. These are the useful abstractions.
Constantly seek to specify, document, and simplify the protocols you implement. The clearer the ideas, the cleaner the implementation.
Strive to use a single programming language.
Organize your code like you would organize a city or a book.
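To illustrate the "self-contained frameworks and libraries" advice above, here is a minimal sketch of the interface-based module boundary the thread's head comment describes for modular monoliths (the module and class names are made up for the example):

```python
from abc import ABC, abstractmethod

# The "billing" module exposes only this interface; other modules depend on
# it, never on the concrete class. Swapping the implementation (or later
# moving it behind a network) doesn't ripple through callers.
class BillingService(ABC):
    @abstractmethod
    def charge(self, customer_id: str, cents: int) -> bool: ...

# The concrete implementation lives inside the billing package, private in
# spirit even if Python doesn't enforce it.
class InMemoryBilling(BillingService):
    def __init__(self):
        self.ledger: dict[str, int] = {}

    def charge(self, customer_id: str, cents: int) -> bool:
        self.ledger[customer_id] = self.ledger.get(customer_id, 0) + cents
        return True

# An "orders" module receives its dependency through the interface only.
def place_order(billing: BillingService, customer_id: str, cents: int) -> bool:
    return billing.charge(customer_id, cents)

billing = InMemoryBilling()
assert place_order(billing, "cust-1", 1999)
assert billing.ledger["cust-1"] == 1999
```

Putting each such module in its own package, as the head comment suggests, also makes the dependency graph explicit: the orders package imports only the interface, not the implementation.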
Cognitive load is one good dimension to think about here. I like that it is a human concern. There are probably multiple.
What I think is challenging is figuring out what the teams should be - this can be more complex than it appears. Do you have an "auth team", for example? How do you ensure the people on that team are happy that their CV is going to say "did auth for 2 years" when they next look for a job?
For small companies you might have micro teams, where there are 0.1 members on that team - i.e. it is just part of someone's role. But treating it like its own team (it gets its own repo(s)) might make sense.
Lots of architects / tech leads read this instead as: "design the software to fit the maximum team cognitive load that you desire”. They actually over-complicate software builds in order to justify headcount later. "Oh whoops yes we now need a whole full time Ops team, don't worry I know just the guys".
I feel like we are going to spend many years coping with Team Topologies, a book which idolizes a seemingly infinite and vast independence of teams from each other in the name of 'speeding things up' or some such.
I love, respect, & cherish the ideas here. But the sound-bite ideas of the book vastly outweigh the practical complexities of development.
Yeah, giving each team authority to do their thing is desperately necessary today; there's too much organizational confusion & too many unclear decision-making processes. Teams need autonomy. Yes. But the book really seems to have so little to say about how to play together. It hardly talks at all about how to find concordance & make decisions across teams. What are good common techs to adopt? Microservices, as in everyone can pick whatever (Haskell for this, PHP for that, & 12 varieties of Node), is one organizational end; monolith is another end. The confusion and angst Team Topologies lets dwell and build is infinite, because its pretense is that there are many parallel streams of development and that inter-team work is a negative.
The book is so good and so important, because so many orgs are fucked and doing things terribly. Cognitive load is massively over-managed and it's impossible to do anything to escape the tar pit the shitty ancient overly established pretentious shitbag elders have dictated. But the result of what Team Topologies says is such an opposite and equally fucked state, where cognitive load expands exponentially because every team is independent & fucking off into their own space, with only vague constraining behavior from nebulous "platform" teams that "support" or maybe dictate to the other teams.
Never have I seen a book I both respect so highly & think so terrible & awful.
Microservices at the very least require very high investment into proper CI/CD and observability. If you are not willing to allocate resources to get those right, things are going to go badly.
Most systems can run on a single server today, and it will basically always be simpler than N smaller ones. Everything is just faster if you skip the TCP layer. Less configuration, fewer attack surfaces, fewer possible problems and issues. CI/CD and backup are a lot easier as well.
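To make the "skip the TCP layer" point concrete, here is a toy Rust sketch (the `billing` module and `charge` function are hypothetical names, not from any real system): in a single-server monolith, a call into another "service" is just an in-process function call.

```rust
// Toy sketch: a cross-"service" call inside a monolith.
mod billing {
    // No serialization, no TCP handshake, no retries, no service
    // discovery: errors are plain values and the call stays in-process.
    pub fn charge(user_id: u64, cents: u64) -> Result<(), String> {
        if cents == 0 {
            return Err(format!("zero charge for user {user_id}"));
        }
        Ok(())
    }
}

fn main() {
    // A nanosecond-scale function call, not a millisecond-scale RPC.
    billing::charge(42, 1999).expect("charge failed");
    println!("charged");
}
```

The same call over HTTP would additionally need a client, serialization of arguments and errors, timeouts, and retry policy, which is the configuration and failure surface the comment above is pointing at.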
It is as another author wrote: it is a people problem. We programmers like the "do one thing" mantra, so we tend to want to apply it everywhere - even in places where the cost may outweigh the benefits.
Yeah I would think that the solution to monolith vs microservices would be “neither should be followed because of dogma; encapsulate where it makes sense, when it makes sense”.
I maintain that the only way to truly fight cognitive load is to outsource more of your design to long-term memory. That is as close to a silver bullet as you will ever get, and it explains way more of the tech landscape than you want to admit.
In the ideal case, there exists a literal piece of mathematics you translate into code and work with from there. The upfront cost is extreme - but, once the math is internalized, the resulting product is sleek and elegant and can be understood rapidly at a bird's eye view by other people who also understand the underlying mathematics.
Weaken this ideal case however appropriate to your business case, but no more than you have to!
You and I must work in very different contexts, as these questions are so obvious that they first seemed like satire to me.
You enforce API contracts in a monolith (or any codebase, really) via an at-least-modest amount of typing and a compiler. You diagnose performance issues via any number of tools, prominently including metrics and profilers.
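As a minimal sketch of what "an at-least-modest amount of typing and a compiler" buys you inside a monolith (the `Email` type and `send_welcome` function are hypothetical illustrations, not a real API):

```rust
// A validated newtype: the only way to obtain an Email is to pass parse().
pub struct Email(String);

impl Email {
    pub fn parse(raw: &str) -> Result<Email, String> {
        if raw.contains('@') {
            Ok(Email(raw.to_string()))
        } else {
            Err(format!("invalid email: {raw}"))
        }
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

// The signature is the contract: callers can only ever pass an Email
// that survived validation. The compiler rejects any call site that
// tries to smuggle a raw string across the boundary.
pub fn send_welcome(to: &Email) -> String {
    format!("welcome sent to {}", to.as_str())
}

fn main() {
    let ok = Email::parse("a@example.com").expect("valid email");
    println!("{}", send_welcome(&ok));
    // send_welcome("raw string") would be rejected at compile time.
}
```

The point is that the module boundary is enforced at build time rather than discovered at runtime across a network hop.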
My context for this is a lot of years working with backend languages like Java, Rust, etc. though the same assurances and tooling are available for most every platform I’m aware of.
I'm pretty sure one of the things that keeps me employed in software is that I'm good at non-software stuff. So much so that I focus quite a bit less on the software now, and a lot more on the things software does, why, and for whom. That stuff seems a lot more important in the scheme of things; especially when money isn't free and people need software to be truly useful and very immediately.
Maybe that's a typical progression in most software careers, but I wouldn't have believed I'd be here 10 years ago, or maybe even 5. I was always very technical (and I still love that side of things). Now I see the people side of things as far more important and interesting.
We might not be able to quantitatively measure it, but we can run studies to evaluate what individuals and teams can handle. Human factors people do this and it's a subfield of industrial engineering. The military runs these studies and I'd imagine air traffic control does as well, etc. You sometimes get really surprising results.
That's something I've noticed: the way we treat systems developed to work on data is _completely_ different from systems developed to work on, idk, oil.
You can build data refineries (ETL) the same as an oil refinery. The difference is, the engineers who build the oil refinery create manuals and standard operating procedures to operate the refinery, because if they don't, then the new board operator will press the wrong button and blow out every window in a five-mile radius.
When you build a data refinery, no one documents _anything_ no matter how many times you ask engineers on the team to do it. Will it blow up in a massive fireball if you do it wrong? No, but it will corrupt data and have a business consequence. You can keep the 40 different microservices for the data refinery in your head though, right?
Depending on what exactly it is that you're storing or processing, neither does data.
Think sensor data and the sensor is a vital signs monitor in a hospital. The service that reads its output and stores it glitches out due to some sort of misclick by a user. It distorts the fact that the patient has an arrhythmia. Or a service that reads off the medication dosages for a patient for a pharmacist is stuck on a single message.
Can anyone suggest a blog/book on how you manage cross-communication across teams regarding evolving software interfaces? Change management for Domain-Driven Design? The way to minimise cognitive load is usually to have established and clearly documented processes.
If only technical reasons were important when choosing an architecture, then microservice architecture would not exist. Its killer feature is team independence and strict separation of responsibilities. Any technical advantages are dwarfed by the technical problems it introduces.
I suspect #points is the main factor. And the algorithm wants to surface fresh stuff, otherwise we'd still all be talking about Sam Altman. Glad we aren't :-) (nothing against Sam...)
IMO: write everything in Rust. If a dependency is not in Rust, rewrite in Rust.
Carmack has a rant about Meta’s stack for the Quest being in different languages and being a pain. Having worked in heavily-split stacks I don’t want to run into these again.
I don't get it?