In cases where there's a high degree of churn in shared libraries (e.g. at early-stage startups), updating those libraries can cause a large amount of busywork and ceremony.
If you had a `foo()` function shared between the GUI and the server (or two services on your backend, or whatever), in a monorepo your workflow is:
- Update foo()
- Merge to master
- Deploy
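To make that concrete, here's a rough sketch of the monorepo case (the layout and names are made up):

```python
# repo/
#   shared/foo.py   <- the one copy of the helper
#   server/app.py   <- imports it straight from the tree
#   gui/app.py      <- so does the GUI

# shared/foo.py
def foo(value: str) -> str:
    """Shared helper called by both the GUI and the server."""
    return value.strip().lower()

# server/app.py and gui/app.py both do:
#
#     from shared.foo import foo
#
# so a fix to foo() reaches every consumer on the next merge and deploy,
# with no publish or version-bump step in between.
```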
In a polyrepo where foo() is defined in a versioned, shared library, your workflow is now:
- Update foo()
- Merge to shared library master
- Publish shared library
- Rev the version of shared library on the client
- Merge to master
- Deploy client
- Rev the version of shared library on the server
- Merge to master
- Deploy server
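For contrast, a rough sketch of the polyrepo consumers (package name and versions are hypothetical): each service resolves its own pinned copy of the shared library, which is why every consumer needs its own bump, merge, and deploy.

```python
# Each consumer repo (GUI, server, ...) pins its own copy of the shared
# library, e.g. `acme-shared==1.4.2` in its requirements file.
from importlib.metadata import PackageNotFoundError, version


def shared_lib_version(package: str = "acme-shared") -> str:
    """Report which published version of the shared library this service resolved."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"


# Run inside the GUI and the server: each reports whatever it has pinned,
# and neither sees the fixed foo() until its own bump PR is merged and deployed.
print("shared library version:", shared_lib_version())
```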
This problem compounds further once your dependencies go more than one level deep.
I recently dealt with an incredibly minor bug (one quick code change) that still required 14 separate PRs to get fully out to production, just to cover all of our dependencies. That's a lot of busywork to contend with.
It seems to me that the real problem is your toolchain.
In a previous project the workflow was like this:
- Update foo()
- Merge to master
- Publish shared library
- Deploy
So as you can see, the only step added was publishing the shared library, which would automatically update the version in all the projects using it.
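The "automatically update the version" part doesn't have to be fancy. Here's a minimal sketch of what such a post-publish hook could do, assuming each consumer pins the library in a requirements.txt (repo URLs, package name, and file layout are placeholders):

```python
#!/usr/bin/env python3
"""Hypothetical post-publish hook: bump the shared-library pin in every
consumer repo and push a branch for review."""
import re
import subprocess
import tempfile
from pathlib import Path

CONSUMERS = [
    "git@example.com:acme/gui.git",
    "git@example.com:acme/server.git",
]
NEW_VERSION = "1.4.3"  # the version that was just published


def bump(repo_url: str) -> None:
    with tempfile.TemporaryDirectory() as tmp:
        subprocess.run(["git", "clone", "--depth=1", repo_url, tmp], check=True)
        requirements = Path(tmp, "requirements.txt")
        pinned = requirements.read_text()
        requirements.write_text(
            re.sub(r"^acme-shared==.*$", f"acme-shared=={NEW_VERSION}",
                   pinned, flags=re.M)
        )
        branch = f"bump-acme-shared-{NEW_VERSION}"
        subprocess.run(["git", "-C", tmp, "checkout", "-b", branch], check=True)
        subprocess.run(["git", "-C", tmp, "commit", "-am",
                        f"Bump acme-shared to {NEW_VERSION}"], check=True)
        subprocess.run(["git", "-C", tmp, "push", "origin", branch], check=True)
        # Opening the PR itself is left to whatever forge CLI/API you use.


for url in CONSUMERS:
    bump(url)
```

In practice a lot of teams get this from something like Renovate or Dependabot rather than a home-grown script; either way the human only touches foo() once.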
If you are really doing everything manually, I can understand that this is a pain, but this has nothing to do with the monorepo / multi-repo distinction; this is a tooling problem.
But you've just invented a sharded monorepo, and now have all the monorepo problems without the solutions.
What if updating foo() breaks something in one of the clients (say, due to reliance on behavior that was never specified)? You didn't catch that issue by running the client's tests, so now the client is broken and they don't necessarily know why. They know the most recent version of the shared library broke them, but then they either have to say "you broke me", or one of the teams has to investigate and possibly bisect across every change in the version bump, under the client's tests, to find the breakage.
How is that handled?
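One common answer, assuming the client can run its test suite against a local checkout of the shared library (e.g. installed in editable mode), is to let git bisect do the digging. Paths, tags, and commands below are placeholders:

```python
# Hypothetical sketch: bisect the shared library between the version the
# client last worked against and the one that broke it, re-running the
# client's tests at each step. Assumes the client imports the shared
# library from this local checkout (e.g. `pip install -e ./shared-lib`).
import subprocess

SHARED = "shared-lib"   # local checkout of the shared library repo
GOOD = "v1.4.2"         # last version the client was green against
BAD = "v1.4.3"          # the version bump that broke the client

subprocess.run(["git", "bisect", "start", BAD, GOOD], check=True, cwd=SHARED)
# `git bisect run` checks out each candidate commit and uses the command's
# exit code to mark it good or bad; here the command is the client's tests.
subprocess.run(["git", "bisect", "run",
                "python", "-m", "pytest", "../client/tests"],
               check=True, cwd=SHARED)
subprocess.run(["git", "bisect", "reset"], cwd=SHARED)
```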
(the broader point here is that monorepo vs. multirepo is an interface, not an implementation; it's all a tooling problem. There are features you want your repo to support. Do you invest that tooling effort in scaling a single repo or in coordinating multiple ones? Maybe I should write that blog post).