> I described some of the benefits I am currently experiencing from using microservices where we previously had a monolith.
Sure - I do appreciate it. It's just that my experience at a company that used both was that the microservices weren't worth it. More generally, I think a lot of people leap to microservices without considering other ways to achieve what they want, so I want to fully work through the options.
> Spinning up multiple components generally takes a few seconds once the image is built, as it is not much more involved than spinning up a new docker container of an image that is already pulled. Each image might take a couple of minutes to build, but because there are no hard dependencies between images (unlike libraries in a monolith) any amount of them can be building and deploying in parallel.
You can do a parallelized build for a monolith too - each component can build in parallel. (Assembling the final image obviously has to be done at the end, but it's a small part of the build time.)
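A minimal sketch of that idea, with hypothetical component names and a stand-in for the real compile step: the components build concurrently, and only the final assembly into one deployable is serial.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical module list; in a real build each entry would be a
# compile/package step for one component of the monolith.
components = ["auth", "billing", "search", "reports"]

def build(component: str) -> str:
    # Stand-in for invoking the actual toolchain for this module;
    # returns the name of the built artifact.
    return f"{component}.jar"

# The components have no hard build-order dependency on each other,
# so they can all compile in parallel...
with ThreadPoolExecutor() as pool:
    artifacts = list(pool.map(build, components))

# ...and only the final assembly of one deployable image happens
# at the end, as a small serial step.
image = "monolith-image(" + ",".join(artifacts) + ")"
print(image)
```

The parallel phase dominates the wall-clock time, so the serial assembly at the end costs little - which is the point being made above.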
> It doesn't really matter how fast the "whole thing" can be built-and-deployed before, because in a monolith, you need every developer building and deploying their own library versions as well as the monolith. Meaning, not only do your changes to production have to wait in the queue for everyone else's, but you unintentionally pick up work from other folks along the way. That road leads down to the environment you describe, where every developer needs at some point to spin up their own dev instance and live on their own branch just to be able to test in isolation, resulting in an organization that requires merging, building, and deploying several times along the way in different environments before the change finally reaches production.
If your team structure is such that you need individually versioned components (which has its pluses and minuses), you can do that while still having a single final deployable.
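A rough sketch of how that can work, using hypothetical component names: each team versions its library independently, but a release is one artifact built from a single set of pinned versions.

```python
# Hypothetical dependency manifest for a single deployable: each
# team releases its component on its own schedule, but production
# ships one artifact built from these pins.
pinned = {
    "auth-lib": "2.3.1",
    "billing-lib": "1.9.0",
    "search-lib": "4.0.2",
}

# The release artifact is fully determined by the pin set, so one
# build produces one deployable containing all components.
release = "app-" + "-".join(f"{k}={v}" for k, v in sorted(pinned.items()))
print(release)
```

Teams get independent versioning and release cadence for their components; operations still gets a single deployable to build, test, and roll back as a unit.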
> I also described a perfect use case of an HTTP interface (or RPC, or whatever language-agnostic interface you want to use, it doesn't matter to me and rarely matters in the real world). Data science work is often in a different language space than web work. I'm not going to bundle native Python/R/Clojure dependencies with my node.js apps. It's too much work to maintain those bindings between runtime versions and app versions. I'm not going to force data scientists to use JS because the library ecosystem is targeted toward completely different use cases. With HTTP or RPC or whatever transport, each team can use the best tools for the job.
I think it's worth mentioning that a lot of people jump to HTTP without thinking, when actually something like Thrift is much better suited to an RPC-like use case.
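To make that concrete, here's a sketch of what a Thrift interface for the kind of data-science service described above might look like (service and method names are hypothetical): one IDL file defines a typed contract, and Thrift generates client and server stubs for each language from it.

```thrift
// Hypothetical interface between a node.js web app and a
// Python model server. Thrift generates typed stubs for both
// languages from this one definition, rather than each side
// hand-rolling JSON-over-HTTP serialization.
struct Features {
  1: list<double> values
}

service Predictor {
  // Returns a score for one feature vector.
  double predict(1: Features input)
}
```

The typed contract and generated stubs are the advantage being claimed here over ad-hoc HTTP: the interface is checked at build time on both sides instead of at runtime.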
I've actually come to think that the best languages are general-purpose languages and you should just use one language (Scala) for everything. I accept that that's a minority viewpoint though.
> I've actually come to think that the best languages are general-purpose languages and you should just use one language (Scala) for everything.
I disagree very strongly with this. My original background is in scientific programming, which I learned primarily during my PhD. Since then I've been working in, and now leading, teams comprising both data scientists (principally from a scientific or mathematics background) and data engineers (primarily from a software engineering background). I can tell you from experience that trying to do everything in one language works extremely poorly in practice.
Scala is a great example. People who focus primarily on scaling out relatively simple data processing systems, perhaps with some machine learning thrown in, are big fans of Scala. Any scientific programmer that I know (note that machine learning, the most popular kind of scientific programming on HN, is only one very tiny part of the field), including the data scientists on my team, simply cannot do their jobs with Scala.
The typical patterns Scala encourages are fundamentally unsuited to advanced scientific programming, which is why almost nobody in academic science uses Scala [insert obligatory indignation about the wonders of functional programming here]. The number of scientific libraries available for Scala is absolutely dwarfed by the C++/Python ecosystem.
Note that I'm not saying that Scala is a bad language; I'm saying that every language has advantages and disadvantages for different kinds of work, and different mindsets.
I won't presume to accuse you of this, because I don't know your position, but this kind of viewpoint is typically held by the kind of programmer who dramatically underestimates the complexity of scientific programming.
Shrug. In a previous job I ported scientific code from R to Scala and 99% of the time it was very straightforward. I'm not saying that scientific code is easier than any other code (if anything the opposite is true), I'm saying that the things that make a good language for science are the same things that make a good language for anything else.