The sad thing about these "monolith vs microservice" debates is that to this day we have programming languages which favor shared mutable state, so a program written in them is absolute hell (or at best a very leaky abstraction) to distribute. And it doesn't have to be like this.
Think about it. When your variable is a simple value, like a number or a string or a struct, we treat it as pass-by-copy (even if copy-on-write optimized), typically stack allocated. Remote I/O is also pass-by-copy. But in between those two levels, we have this intermediate pointer/handle hell of mutable shared state that the C family of languages promotes, in both its procedural and OOP varieties.
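To make the contrast concrete, here's a minimal Java sketch (names are made up): the primitive is copied on every call, while the object reference hands the callee the very same mutable heap cell.

    import java.util.ArrayList;
    import java.util.List;

    class SharingDemo {
        static void bumpValue(int n) { n += 1; }                // sees a copy; caller unaffected
        static void bumpShared(List<Integer> xs) { xs.add(1); } // mutates the caller's object

        public static void main(String[] args) {
            int n = 0;
            bumpValue(n);
            System.out.println(n);         // 0 -- by-copy semantics
            List<Integer> xs = new ArrayList<>();
            bumpShared(xs);
            System.out.println(xs.size()); // 1 -- shared mutable state
        }
    }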
The original OOP definition is closer to the Actor model, which has by-copy messages, but the actual OOP languages we use, like C++, Java, and C#, all derive philosophically from the way C handles entities on the heap: as one big shared local space you have immediate access to and can pass pointers into.
And that's where all our problems come from. This concept doesn't scale. Not in terms of larger codebases. Not in terms of distributing an application. Nor does it scale cognitively, which the article mentions but doesn't quite address in this context.
Something I’ve wanted for a while now is a language / framework that behaves like networked microservices but without the network overheads.
E.g.: the default hosting model might be to have all of the services in a single process with pass-by-copy messages. One could even have multiple instances of a service pinned to CPU cores, with hash-based load balancing so that L2 and L3 caches could be efficiently utilised.
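A minimal sketch of what that single-process tier could look like in Java (all names hypothetical): each service instance owns a queue, messages are immutable records so handing one over is semantically by-copy, and the dispatcher hashes a key so the same entity always lands on the same instance. Actual core pinning isn't in the JDK; you'd reach for something like OpenHFT's thread-affinity library.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Immutable message: handing it to another instance is semantically pass-by-copy.
    record Request(String key, String payload) {}

    class SingleProcessHost {
        static final int INSTANCES = 4; // e.g. one per core

        public static void main(String[] args) {
            List<BlockingQueue<Request>> queues = new ArrayList<>();
            for (int i = 0; i < INSTANCES; i++) {
                BlockingQueue<Request> q = new ArrayBlockingQueue<>(1024);
                queues.add(q);
                int id = i;
                new Thread(() -> {
                    while (true) {
                        try {
                            Request r = q.take(); // handlers only ever see immutable messages
                            System.out.println("instance " + id + " handled " + r.key());
                        } catch (InterruptedException e) { return; }
                    }
                }).start();
            }

            // Hash-based "load balancing": the same key always routes to the same
            // instance, keeping that entity's working set hot in one core's L2/L3.
            Request r = new Request("user-42", "hello");
            queues.get(Math.floorMod(r.key().hashCode(), INSTANCES)).offer(r);
        }
    }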
The “next tier” could be a multi-process host with shared memory. E.g.: there could be permanent “queue” and “cache” services coupled to ephemeral Web and API services. That way, each “app” could be independently deployed and restarts wouldn’t blow away terabytes of built up cache / state. One could even have different programming languages!
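A rough sketch of that shared-memory tier in Java (the file name and layout are invented): the long-lived "cache" process and the ephemeral app processes map the same region under /dev/shm, so redeploying the app doesn't blow the state away.

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    class SharedRegion {
        public static void main(String[] args) throws Exception {
            // Both processes map the same file; on Linux, /dev/shm keeps it in RAM,
            // and it outlives any single process that maps it.
            try (RandomAccessFile f = new RandomAccessFile("/dev/shm/demo-cache", "rw");
                 FileChannel ch = f.getChannel()) {
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                if (args.length > 0 && args[0].equals("writer")) {
                    buf.putLong(0, System.currentTimeMillis()); // "cache" service publishes state
                } else {
                    System.out.println("cached value: " + buf.getLong(0)); // app process reads it
                }
                // A real design needs an agreed layout plus fences/locks for coordination,
                // which is also what makes this binding language-agnostic.
            }
        }
    }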
Last but not least, scale-out clusters ought to use RDMA instead of horrifically inefficient JSON-over-HTTPS.
Ideally, the exact same code ought to scale to all three hosting paradigms without a rewrite (but perhaps a recompile).
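Concretely, the "same code, three tiers" part boils down to services coding against a transport interface and the host picking the binding at deploy time. A hand-wavy Java sketch (every name here is hypothetical):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Consumer;

    // Services only ever see this; the hosting tier supplies the implementation:
    // in-process queues, shared-memory ring buffers, or RDMA -- same calls either way.
    interface MessageBus {
        void subscribe(String service, Consumer<byte[]> handler);
        void send(String service, byte[] message);
    }

    // The single-process binding; a SharedMemoryBus or RdmaBus would implement
    // the same interface for the other two tiers.
    class InProcessBus implements MessageBus {
        private final Map<String, Consumer<byte[]>> handlers = new ConcurrentHashMap<>();

        public void subscribe(String service, Consumer<byte[]> handler) {
            handlers.put(service, handler);
        }

        public void send(String service, byte[] message) {
            // clone() enforces by-copy semantics even within one address space
            handlers.get(service).accept(message.clone());
        }
    }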
Some platforms almost-but-not-quite work this way, such as EJB hosts, which can short-circuit networking for local calls. However, they’re not truly polyglot as they don’t support non-JVM languages. Similarly, Service Fabric has some local-host optimisations, but they’re special cases. Kubernetes is polyglot but doesn’t use shared memory and has no single-process mode.
Yes, same feeling I had, but with a dual mode: the same component can be compiled as a 'standalone' service over HTTP/REST (or whatever) and _also_ compiled as a classic in-process module with a strictly defined interface.
One cool thing about standalone services that needs to be factored in is that they can be spun up and debugged very easily. But in deployment, we pay for all the network latency/marshaling overhead and coordination complexity.
So, best of both worlds? As for polyglot, there does have to be a shared platform (C ABI, JVM, etc). (Go doesn't play so nicely with other languages due to goroutine stack allocation.)
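In Java terms the dual mode could look something like this (the endpoint and all names are invented): callers depend only on the interface, and the build links either the real implementation in-process or a thin HTTP stub.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // The strictly defined interface both modes share.
    interface Greeter {
        String greet(String name);
    }

    // Mode 1: classic module, linked straight into the caller's process.
    class LocalGreeter implements Greeter {
        public String greet(String name) { return "hello " + name; }
    }

    // Mode 2: the same contract backed by a standalone HTTP service.
    class RemoteGreeter implements Greeter {
        private final HttpClient client = HttpClient.newHttpClient();
        private final String baseUrl; // e.g. "http://greeter.internal:8080" -- invented

        RemoteGreeter(String baseUrl) { this.baseUrl = baseUrl; }

        public String greet(String name) {
            try {
                HttpRequest req = HttpRequest
                        .newBuilder(URI.create(baseUrl + "/greet?name=" + name))
                        .build();
                return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
            } catch (Exception e) {
                // the latency/marshaling cost mentioned above leaks through here
                throw new RuntimeException(e);
            }
        }
    }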
SCA (Service Component Architecture) did this back in the J2EE/SOAP days. An SCA interface was just the interface, but the boundary itself could be implemented either as an in-process plain Java call, a cross-EJB call, or a SOAP call, so that in theory one could be swapped out for the other. In practice, IME, it never was, but maybe I just never came across the right use cases.
From your description I thought you might just want a bunch of singletons calling each other's methods ("passing messages"), and to get the "by-copy" you could serialize and deserialize everything, or write proper copy constructors. Do I understand correctly?
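I.e. something like this crude Java sketch, where the serialization round-trip is what buys the by-copy semantics (illustrative only, not production code):

    import java.io.*;

    class DeepCopy {
        // Hands the receiver a structurally identical object sharing no state with the original.
        @SuppressWarnings("unchecked")
        static <T extends Serializable> T copy(T msg) throws IOException, ClassNotFoundException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(msg);
            }
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (T) in.readObject();
            }
        }
    }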
The best way to do this is message passing. My current way of doing it is using Aeron[0] + SBE[1] to pass messages very efficiently between "services" - you can then configure it either to use local shared memory (/dev/shm) or to replicate the log buffer over the network to another machine.
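For flavour, the plumbing looks roughly like this (the stream id and payload are invented, and real code would encode with SBE-generated codecs rather than a raw buffer); swapping the "aeron:ipc" channel for a UDP one is what moves you from shared memory to the network:

    import io.aeron.Aeron;
    import io.aeron.Publication;
    import io.aeron.Subscription;
    import io.aeron.driver.MediaDriver;
    import org.agrona.concurrent.UnsafeBuffer;
    import java.nio.ByteBuffer;

    class AeronDemo {
        public static void main(String[] args) {
            // "aeron:ipc" = shared-memory transport; use something like
            // "aeron:udp?endpoint=host:40123" to cross machines without touching app code.
            String channel = "aeron:ipc";
            int streamId = 10; // arbitrary

            try (MediaDriver driver = MediaDriver.launchEmbedded();
                 Aeron aeron = Aeron.connect(new Aeron.Context()
                         .aeronDirectoryName(driver.aeronDirectoryName()));
                 Subscription sub = aeron.addSubscription(channel, streamId);
                 Publication pub = aeron.addPublication(channel, streamId)) {

                UnsafeBuffer buffer = new UnsafeBuffer(ByteBuffer.allocateDirect(64));
                buffer.putStringAscii(0, "tick"); // stand-in for an SBE-encoded message
                while (pub.offer(buffer, 0, 64) < 0) Thread.onSpinWait(); // back-pressure loop

                while (sub.poll((buf, offset, length, header) ->
                        System.out.println(buf.getStringAscii(offset)), 1) == 0) {
                    Thread.onSpinWait();
                }
            }
        }
    }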
I'm working on a language like that. But my esoteric stuff aside, the closest production system we have is Erlang (and Elixir, if that's your thing).
It is, because that's literally an architectural choice: one which prevents you from easily moving a module out of your "monolith" to another machine on the network, and which causes the bugs.
The language & memory architecture... are an architecture matter.
Yes, but not necessarily: with Rust you can put stuff on the heap yet it's still ownership-checked, just like the stack is scoped in other languages. Conceptually, I mean.