Give Me Back My Monolith (craigkerstiens.com)
861 points by zdw on March 13, 2019 | 411 comments



I don't think I blame the author at all. I'm not sure why you would start with microservices, unless you wanted to show that you could build a microservices application. Monoliths are quicker and easier to setup when you're talking about a small service in the first place.

It's when an organization grows and the software grows and the monolith starts to get unwieldy that it makes sense to go to microservices. It's then that the advantage of microservices both at the engineering and organizational level really helps.

A team of three engineers orchestrating 25 microservices sounds insane to me. A team of thirty turning one monolith into 10 microservices and splitting into 10 teams of three, each responsible for maintaining one service, is the scenario you want for microservices.


We’ve done exactly this - turned a team of 15 engineers managing one giant monolith into two teams managing about 10 or so microservices (docker + kubernetes, OpenAPI + light4j framework).

Even though we are in the early stages of redesign, I’m already seeing some drawbacks and challenges that just didn’t exist before:

- Performance. Each of the services talks to the others via a well-defined JSON interface (OpenAPI/Swagger yaml definitions). This sounds good in theory, but parsing JSON and then serializing it N times has a real performance cost. In a giant “monolith” (in the Java world) EJBs talked to each other, which, despite being Java-only (in practice), was relatively fast and could work across web app containers. In hindsight, it was probably a bad decision to JSON-ize all the things (maybe another protocol?)

- Management of 10-ish repositories and build jobs. We have Jenkins for our semi-automatic CI. We also have our microservices in a hierarchy, all depending on a common parent microservice. So naturally, branching, building and testing across all these different microservices is difficult. Imagine having to roll back a commit, then having to find the equivalent commit in the two other parent services, then rolling back the horizontal services to the equivalent commit, some with different commit hooks tied to different JIRA boards. Not fun.

- Authentication/Authorization also becomes challenging since every microservice needs to be auth-aware.

As I said we are still early in this, so it is hard to say if we reduced our footprint/increased productivity in a measurable way, but at least I can identify the pitfalls at this point.


The first thing I start trying to convince people to ditch on any internal service is JSON.

There are only two times when JSON is a particularly good choice: when it's important for the messages themselves to be human-readable, or when it's important to be able to consume it from JavaScript without using a library. Any other time, something like protocol buffers is going to give you lower latency, lower bandwidth requirements, lower CPU costs, lower development effort, less need for maintaining documentation, and better standardization.

If you ditch the HTTP stuff while you're at it, you can also handily circumvent all the ambiguities and inter-developer holy wars that are all but inherent to the process of taking your service's semantics, whatever they are, and trying to shoehorn them into a 30-year-old protocol that was really only meant to be used for transferring hypermedia documents. Instead you get to design your own protocol that meets your own needs. Which, if you're already building on top of something like protobuf, will probably end up being a much simpler and easier-to-use protocol than HTTP.
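To make that concrete, the call site in Java ends up looking roughly like this once the schema is compiled (a minimal sketch; `UserEvent` and its fields stand in for whatever protoc would generate from your own .proto file):

    import com.google.protobuf.InvalidProtocolBufferException;

    // Minimal sketch; `UserEvent` stands in for a class generated by protoc,
    // and the field names are made up for illustration.
    final class ProtoRoundTrip {
        static byte[] encode(long userId, String email) {
            UserEvent event = UserEvent.newBuilder()
                    .setUserId(userId)
                    .setEmail(email)
                    .build();
            return event.toByteArray();       // compact binary wire format
        }

        static UserEvent decode(byte[] wire) throws InvalidProtocolBufferException {
            return UserEvent.parseFrom(wire); // schema-checked parse, no hand-rolled JSON handling
        }
    }

The schema file becomes the single source of truth, and neither side hand-writes any parsing or validation code.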


Not to mention, JSON APIs start out simple because you just call `to_json` on whatever object you need to share and then move on. Except nobody’s ever documented the format, and handling ends up taking place in a million different places because you can just pass the relevant bits around. The API is “whatever it did before we split it up”.

Now when someone goes to replace one side, it’s often impossible to even figure out a full definition of the structure of the data, much less the semantics. You watch a handful of data come across the pipe, build a replacement that handles everything you’ve seen, and then spend the next few months playing whack-a-mole fixing bugs when data doesn’t conform to your structural or semantic expectations.

JSON lets you get away with never really specifying your APIs, and your APIs often devolve to garbage by default. Then it becomes wildly difficult to ever replace any of the parts. JSON for internal service APIs is unmitigated evil.


You could just make a shared object module that is versioned and have all the microservices use those objects to communicate. Or you could implement avro schemas. Many ways around this issue.


I don't know if I would recommend NOT using json when starting out (I'm on the side of pushing for a monolith for as long as possible), but yes omg I've been there.

I've migrated multiple services out of our monolith into our micro service architecture and oh boy, it is just impossible to know (or find someone who knows) what structure is passed around, or what key is actually being used or not. Good luck logging everything and pulling your hair documenting everything from the bottom up.


> Except nobody’s ever documented the format,

That's hardly a JSON problem. You still experience that problem if you adopt any undocumented document format or schema.


I think the point the OP was making was more about the tooling than the format. Having a library that forces you to define the format you're using, instead of dumping to_json whatever internal representation you had, also doubles as documentation.

But then, of course, that can be considered boilerplate code (and, in the beginning and most of the time, it actually is just a duplication of your internal object structure).


That, and having the format definition file support comments means that you have a convenient all-in-one place where, at least in the more straightforward cases, you can handily describe both the message format and the service's behavior in a single file that's easy to share among all your teams.


Precisely.

The easiest path with JSON is to do none of this, and so the majority of teams (particularly inexperienced ones) do none of it. With protos, someone must at least sit down and authoritatively outline the structure of any data being passed around, so at a minimum you’ll have that.

But even just forcing developers to do this generally means they start thinking about the API, and have a much higher chance of documenting the semantics and cleaning up parts that might otherwise have been unclear, confusing, or overly complex.


I’m surprised that bencoded dictionaries haven’t ever become popular outside of bittorrent. They are expressive, fast, and easy to implement.
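To illustrate how little code an encoder takes, here's a minimal sketch in Java (integers, strings, and string-keyed dictionaries only; real bencoding also covers lists and raw byte strings):

    import java.util.Map;
    import java.util.TreeMap;

    // Minimal bencoding sketch: integers, strings, and string-keyed dictionaries.
    final class Bencode {
        @SuppressWarnings("unchecked")
        static String encode(Object v) {
            if (v instanceof Integer || v instanceof Long) {
                return "i" + v + "e";                        // integers: i42e
            }
            if (v instanceof Map) {
                Map<String, Object> sorted = new TreeMap<>((Map<String, Object>) v); // keys must be sorted
                StringBuilder sb = new StringBuilder("d");
                for (Map.Entry<String, Object> e : sorted.entrySet()) {
                    sb.append(encode(e.getKey())).append(encode(e.getValue()));
                }
                return sb.append('e').toString();            // dictionaries: d...e
            }
            String s = v.toString();
            return s.length() + ":" + s;                     // strings: 4:spam
        }
    }

Bencode.encode(Map.of("name", "alice", "age", 30)) comes out as d3:agei30e4:name5:alicee - one pass, no escaping rules, keys always sorted.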


Man if you think protobuf is fast, just wait until you try C pod structs with type-length headers.

They are literally infinitely faster to encode/decode than protobuf.

They even have the same obnoxious append-only extensibility of protobuf if that’s what really gets your jimmies firing.


FlatBuffers, and then endian concerns are gone.


Are FlatBuffers essentially Capnproto without the RPC?


FlatBuffers can be used with gRPC, so they shouldn't need a separate RPC ecosystem like Cap'n Proto: https://grpc.io/blog/flatbuffers


Right, but I'm asking if FlatBuffers' approach to encoding is similar to Cap'n Proto. It seems like the answer is "yes", but I might be missing something.


They are both zero-copy, but the designs have a number of differences. I compared them back in 2014: https://capnproto.org/news/2014-06-17-capnproto-flatbuffers-...

Disclaimers:

1) I'm the author of Cap'n Proto; I'm probably biased.

2) A lot could have changed since 2014. (Though, obviously, serialization formats need to be backwards-compatible which limits the amount they can change...)


Cool, thanks for this! I'm happily using Cap'n Proto in a side project, and so far have really enjoyed working with it. It's a really impressive piece of engineering.


asn1 for the win


It's a shame you're catching downvotes for mentioning ASN1. It's so much better than all these newfangled things that it makes them look like toys.


I'm not sure why ASN.1 isn't more popular for a zero-copy encoding system


What serialization format do you all recommend when starting out? Protobufs seems to be the go-to, but what about Cap'n Proto etc?


(personal opinion, not speaking on behalf of my employer)

Start out with protobufs, so you can take advantage of gRPC[1] and all of its libraries and all the other tooling that is out there.

If you profile and determine that serialization/deserialization is actually a bottleneck in your system in a way that is product-relevant or has non-trivial resource costs, then you can look at migrating to FlatBuffers, which can still use gRPC[2].

[1] https://grpc.io/ [2] https://grpc.io/blog/flatbuffers
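For a sense of what the client side ends up looking like in Java, a minimal sketch (the `UserServiceGrpc` stub, the request/reply types, and the host/port are stand-ins for whatever the protobuf/gRPC plugin generates for your own service):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    // Minimal sketch of a blocking gRPC call; `UserServiceGrpc`, `GetUserRequest`,
    // and `UserReply` stand in for generated classes, and the host/port are made up.
    final class UserClient {
        static String fetchEmail(long userId) {
            ManagedChannel channel = ManagedChannelBuilder
                    .forAddress("user-service", 50051)
                    .usePlaintext()     // assumes TLS is handled elsewhere, e.g. a sidecar proxy
                    .build();
            try {
                UserServiceGrpc.UserServiceBlockingStub stub = UserServiceGrpc.newBlockingStub(channel);
                UserReply reply = stub.getUser(GetUserRequest.newBuilder().setUserId(userId).build());
                return reply.getEmail();
            } finally {
                channel.shutdown();
            }
        }
    }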


Ah, good tip, thanks!


I'd agree on starting with protobufs, precisely because it's the go-to. Other options have plenty of advantages, but none are as widely supported.


> So naturally, branching, building and testing across all these different microservices is difficult. Imagine having to roll back a commit, then having to find the equivalent commit in the two other parent services, then rolling back the horizontal services to the equivalent commit

that should not happen. if it does you don't have a microservice architecture, you have a spaghetti service architecture.


How does one know if X is the wrong solution or if X is the right solution but the shop is "doing X wrong"? This also applies to monoliths: maybe they can scale, but one is doing them wrong. Changing from doing monoliths wrong to doing microservices wrong is obviously not progress.

The same issue appeared when OOP was fairly new: people started using it heavily and ended up making messes. They were then told that they were "doing it wrong". OOP was initially sold as magic reusable Lego blocks that automatically create nice modularity. There was even an OOP magazine cover showing just that: a coder using magic Legos that farted kaleidoscopic glitter. Microservices is making similar promises.

It took a while to learn where and how to use OOP and also when not to: it sucks at some things.


Simple:

If X is a technology I don't like, and it's not working for you, then it's the wrong solution.

If X is a technology I don't like, and it is working for you, then you simply haven't scaled enough to understand its limitations.

If X is a technology I like, but it's not working for you, then your shop is "doing X wrong".

If X is a technology I like, and it's working for you, then it's the right solution and we're both very clever.


edit: See below


Uh, my post was snark towards engineers' general tendency to champion and defend their own favourite architecture over competing approaches rather than focusing on the most suitable architecture for the circumstance, in response to this question:

> How does one know if X is the wrong solution or if X is the right solution but the shop is "doing X wrong"?

Edit: And a civil conversation ensued. :)


Sorry, I just misread your post as being more of an attack. As I had said, I dislike the constant "hype means something is bad" posts that I've been seeing for years - I think it's really unfortunate.


No worries, I can see how it might have been read that way in the wider context of the conversation. And I totally hear you on the "hype equals bad" thing. If something's popular, it's popular for a reason. That reason MIGHT just be trendoids jumping on a hype train, but it might also be because the thing is good.


If you want to learn "the right way" I highly recommend "Building Microservices: Designing Fine-Grained Systems" by Sam Newman[1]

[1] https://smile.amazon.com/Building-Microservices-Designing-Fi...


I have posted about this before. OP is describing nano-services, not micro-services. A nano-service is a micro-service that provides leftpad via a JSON API.

You have an auth.yourapp.com and api.yourapp.com and maybe tracer.yourapp.com and those three things are not a single app that behaves like auth, api or tracer depending on setting of a NODE_ENV variable? If so, you have micro services.


> that should not happen. if it does you don't have a microservice architecture, you have a spaghetti service architecture.

A "service" is not defined principally by a code repository or communications-channel boundary (though it should have the latter and may have the former), but by a coupling boundary.

OTOH, maintaining a coupling boundary can have non-obvious pitfalls; e.g., supported message format versions can become a source of coupling--if you roll back a service to the version that can't send message format v3, but a consumer of the messages requires v3 and no longer supports consuming v2, then you have a problem.


you should never actually remove an api without a deprecation process. basically a message format is part of your api. actually in a microservice world your interfaces/apis should be as stable as possible. if something gets changed, it needs to go through a deprecation process, which means not removing it for a certain amount of time.


private apis exist for a reason. going through the deprecation cycle repeatedly sounds like a waste of time if you control all sides of the system. which is to say, not every interface needs to be a service boundary. you do need to keep versioned apis, but there is a cost to doing so.


IMHO, part of a successful microservices program is treating your "private" APIs like your public ones, with SLAs, deprecation protocols, etc. Otherwise, you end up with tight coupling to the point that you should just go with a monolith.


When you are still figuring out the problem space and message formats are being created, removed, extended, and reverted several times a sprint, and you have already gone live with third parties depending on stable interfaces so that you cannot make the changes you need in a timely fashion, the warm arms of a purpose-built monolith look particularly attractive.


Indeed. At that early stage of development, a monolith (or at least monolith-style practices, such as tight coupling between API providers and consumers) is definitely simpler and more efficient. But it wouldn't hurt to take steps from the beginning to make it easier to break the monolith apart when/if it becomes pragmatic to do so.


> private apis exist for a reason. going through the deprecation cycle repeatedly sounds like a waste of time if you control all sides of the system. which is to say, not every interface needs to be a service boundary.

The whole point of a microservice is to create a service boundary. If you have a private interface where both sides are maintained by the same team, both sides should be in the same service.


All interfaces are service boundaries. The only difference is whether you control all clients and servers or not. Either way, API versioning is a must and trivial to implement, and there is really no excuse to avoid it. All it takes is a single integration problem caused by versioning problems to waste more time than it takes to implement API versioning.


The Accept header could fix this easily if no third parties are involved and it's a REST-like API protocol


The accept header is a mechanism for communicating formats you support; it doesn't do anything to address the problem of managing change to supported versions, which is a dev process issue, not a technical issue.


Feature flags, just enable it.

When all applications are adjusted, the accept headers request a protobuf format in return.

=> Propagated everywhere except when a js ajax call happens to the api-gateway.


> The accept header is a mechanism for communicating formats you support; it doesn't do anything to address the problem of managing change to supported versions

That assertion is not true. Media type API versioning is a well established technique.
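For example, in a Spring MVC service the client's Accept header can select the handler version (a minimal sketch; the vendor media types, the `OrderV1`/`OrderV2` DTOs, and the path are made up for illustration):

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    // Media type versioning sketch: two handlers on the same path, distinguished only
    // by the produced media type; Spring matches them against the Accept header.
    @RestController
    class OrderController {

        @GetMapping(value = "/orders/{id}", produces = "application/vnd.acme.order.v1+json")
        OrderV1 getOrderV1(@PathVariable long id) {
            return new OrderV1(id);   // hypothetical DTO for the old representation, kept until deprecation ends
        }

        @GetMapping(value = "/orders/{id}", produces = "application/vnd.acme.order.v2+json")
        OrderV2 getOrderV2(@PathVariable long id) {
            return new OrderV2(id);   // hypothetical DTO for the new representation
        }
    }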


The Accept header could fix this easily if no third parties are involved


It sounds like you're paraphrasing the great Steve Jobs: "You're holding it wrong"

Be that as it may, I believe mirkules's issue is not an uncommon one. Perhaps saying "building a microservice architecture 'the right way' is a complex and subtle challenge" would capture a bit of what both of you are saying.

Something being complex and therefore easy to mess up does not mean it's a great system and the users are dumb, especially if there are other (less complicated, less easy to mess up) ways to complete the task.


> Perhaps saying "building a microservice architecture 'the right way' is a complex and subtle challenge" would capture a bit of what both of you are saying.

Supporting API versioning is not a complex or subtle challenge. It's actually a very basic requirement of a microservice architecture. It's like having to test for null pointers: it requires additional work but it's still a very basic requirement if you don't want your app to blow up in your face.


Heh, I like that “Spaghetti Microservices”.

You are right. It should not happen. It is difficult to see these pitfalls when unwinding an unwieldy monolith, especially when, as an organization, all you’ve ever done is unwieldy monoliths that have a gazillion dependencies, interfaces and factories.

We learned from it, and we move on - hopefully, it serves as a warning to others.


I've heard the term 'Distributed Monolith'


I think we should call it angel-hair pasta code. You know, because it's very small spaghetti.


MicroPasta


Tangled angelhair.


No, it's ravioli oriented architecture


To handle deploys and especially rollbacks you need a working CI, or rather CD, chain where everything is automated. If there are dependencies in your architecture, all of them need to be part of the same deploy so you can redo the deploy before that one. With a monolith, you get a simple deploy of all dependencies, as they are baked into one thing. There are downsides to everything being baked into one thing. Having all your "micro" services deployed as one package would make the dependencies you have act as they did before the move off the monolith. Seeing these kinds of dependencies, imho, even in a monolith is an architecture problem, one that in the monolith shows up when code is changed and fixes can't be handled locally, but changes have to go into large parts of the code base.


I think a bigger thing here, is that each deploy should have a REALLY good reason to not be backwards compatible for some pretty long time period. If that requirement is painful for you, then you probably have two pretty tightly coupled services.


Why so many microservices?

A monolith has a huge advantage when your code is maybe 100k lines or below:

1. Easy cross module unit testing/integration testing, thus sharing components is just easier.

2. Single deployment process

3. CR visibility automatically propagates to all parties of interest, assuming the CR process is working as desired.

4. Also, just a personal preference: easier IDE code suggestions. If you go through JSON serializing/de-serializing across module boundaries, type inference/cohesion is just out of reach.

And it is not like a monolith doesn't have separation of concerns at all. After all, a monolith can have modules and submodules. Start abstracting using the file system, by grouping relevant stuff into folders, before putting them into different packages. After all, once things diverge, it is really hard to go back.

Unless you have a giant team and more than enough engineers to spare for devops, microservices can be considered an organizational premature optimization.


JSON parsing IS expensive. Way more expensive than many people realize. There is actually an almost-underground JSON parser SCENE in the .Net ecosystem where people develop new parsers and try to squeeze the maximum performance using all sorts of tricks: https://github.com/neuecc/Utf8Json . Here there is discussion of needing JSON support in .Net Core that's faster than Json.Net: https://github.com/dotnet/announcements/issues/90 .

And people say web applications are never CPU bound :)


gRPC is a little bit faster. Google created protobuf, which should be easier to migrate to than gRPC, but the protocol is unreadable (binary)...

JSON has its advantages. I prefer a feature flag when I need performance (protobuf vs JSON); http headers do the rest


gRPC is built on top of protobuf. It’s literally just RPC with protobufs.


one could argue that if serialization/deserialization is the majority of what your api is spending time on then you’ve got distributed monoliths and shouldn’t be making the call in the first place. also ensuring you have the fastest json parser before having a real problem is premature optimization


> JSON parsing IS expensive

Well, maybe

https://news.ycombinator.com/item?id=19214387


"turned a team of 15 engineers from managing one giant monolith to two teams managing about 10 or so microservices"

Knowing absolutely nothing about your product, this sounds like a bad way to split up your monolith.

Probably something like 5 teams of 3 each managing 1 microservice would be a better way to split things up. That way each team is responsible for defining their service's API, and testing and validating the API works correctly including performance requirements. This structure makes it much less likely services will change in a tightly coupled way. Also, each service team must make sure new functionality does not break existing APIs. Which all make it less likely to have to roll back multiple commits across multiple projects.

The performance issues you cite, also seem to indicate you have too many services, because you are crossing service boundaries often, with the associated serialization and deserialization costs. So each service probably should be doing more work per call.

"all depending on a common parent microservice"

This makes your microservices more like a monolith all over again, because a change in the parent can break something in the child, or prevent the child from changing independently of the parent.

Shared libraries I think are a better approach.

"Authentication/Authorization also becomes challenging since every microservice needs to be auth-aware."

Yes, this is a pain. Because security concerns are so important, it is going to add significant overhead to every service to make sure you get it right, no matter what approach you use.


Probably more like do your best to reason about the boundaries of each bounded context and pick an appropriate number of services based off that analysis?

Surely splitting up your application along arbitrary lines based on the advice of an internet stranger who's never seen the application and doesn't know the product/business domain just isn't a sound way of approaching the problem.


in the end, your architecture will reflect your organigram. (IIRC it's Conway's law?)


You are correct.

Conway's Law is profound. Lately I realized even the physical office layout (if you have one) acts as an input into your architecture via Conway's Law.


We use grpc instead of rest for internal synchronous communication, but we've also found that by using event pub/sub between services, there are not many use cases where we have direct calls between services.

We used to have a parent maven pom and common libraries but got rid of most of that because it caused too much coupling. Now we create smaller more focused common libraries and favor copy/paste code over reuse to reduce coupling. We also moved a lot of the cross cutting concerns into Envoy so that the services can focus on business functionality.


> favor copy/paste code over reuse to reduce coupling.

This looks like a big step backwards to me.


I would say it depends on the stability of what's being copy/pasted. If it's just boilerplate, it's less concerning.

In my opinion, decoupling should be prioritized over DRYness (within reason). A microservice should be able to live fairly independently from other microservices. While throwing out shared libraries (which can be maintained and distributed independently from services) seems like overkill, it seems much better than having explicit inheritance between microservice projects like the original poster is describing.


Sure, no problem with boilerplate.

For any non trivial code, which needs to be maintained and be kept well tested, to the contrary of the OP, I would favor shared libraries over copy/paste.


How do you handle cases where your client is awaiting a response with a decoupled pub/sub backend? E.g. a user creates an account and the client needs to know their user id.

Would that user object be the responsibility of one service, or written to many tables in the system under different services, or...?


For one, you could use something like snowflake IDs so that whatever server receives the user data first can generate and return an id for that user before tossing the data on a queue to be processed.
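Something along these lines (a minimal sketch of the usual 41/10/12-bit layout; the custom epoch and bit widths are illustrative, not any particular library's):

    // Minimal snowflake-style ID sketch: 41 bits of millisecond timestamp,
    // 10 bits of worker id, 12 bits of per-millisecond sequence.
    final class SnowflakeIds {
        private static final long CUSTOM_EPOCH = 1546300800000L; // 2019-01-01T00:00:00Z, illustrative
        private final long workerId;   // 0..1023, unique per server instance
        private long lastMillis = -1L;
        private long sequence = 0L;

        SnowflakeIds(long workerId) {
            this.workerId = workerId & 0x3FF;
        }

        synchronized long nextId() {
            long now = System.currentTimeMillis();
            if (now == lastMillis) {
                sequence = (sequence + 1) & 0xFFF;   // 12-bit sequence within the millisecond
                if (sequence == 0) {                 // sequence exhausted: spin until the next tick
                    while (now <= lastMillis) {
                        now = System.currentTimeMillis();
                    }
                }
            } else {
                sequence = 0;
            }
            lastMillis = now;
            return ((now - CUSTOM_EPOCH) << 22) | (workerId << 12) | sequence;
        }
    }

The caller gets the id back immediately, and because the id embeds the timestamp it still sorts roughly by creation time once the queued write lands.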


How would you approach a situation where a client updates a record in service A, and then navigates to a page whose data is returned by service B, which has a denormalized copy of service A's records that hasn't consumed and processed the "UpdatedARecord" event?

Do we accept that sometimes things may be out of sync until they aren't? That can be a jarring user experience. Do we wait on the Service B event until responding to the client request? That seems highly coupled and inefficient.

I'm genuinely confused as to how to solve this, and it's hard to find good practical solutions to problems real apps will have online.

I suppose the front end could be smart enough to know "we haven't received an ack from Service B, make sure that record has a spinner/a processing state on it".


You use eventing only when eventual consistency is acceptable. In your scenario, it sounds like it is not. So then you should use synchronous communication to ensure the expected experience is met. However, that also means that now you can't do stuff with service B without service A being up. So you're trading user experience against resiliency.

Also, you should check your domains and bounded contexts and reevaluate whether A and B are actually different services. They might still legitimately be separate. Just something to check.


Some people advocate that microservices own their data and only provide it through an API. In this scenario, Service B would need to query Service A for the authoritative copy of the record. I think the standard way to deal with the query and network time is, yes, to wait until Service A provides the data and timeout if it takes "too long".

Then your question is about optimizing on top of the usual architecture which hopefully is an infrequent source of pain that is worth the cost of making it faster. I could imagine some clever caching, Service A and Service B both subscribing to a source of events that deal with the data in question, or just combining Service A and B into one component.


I would create the account directly from the initial call and return its ID and then publish an account created message. Any other services could receive the message and perform some action such as send a welcome email or do some analytics.
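Roughly like this (a minimal sketch assuming Kafka for the pub/sub part; the repository, topic name, and payload shape are all made up):

    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Sketch of "create synchronously, publish afterwards". AccountRepository is a
    // hypothetical synchronous store; the producer is assumed to be long-lived and shared.
    final class AccountService {
        private final AccountRepository accounts;
        private final Producer<String, String> producer;

        AccountService(AccountRepository accounts, Producer<String, String> producer) {
            this.accounts = accounts;
            this.producer = producer;
        }

        long createAccount(String email) {
            long accountId = accounts.create(email);   // the caller gets the id from this call
            producer.send(new ProducerRecord<>("account-created",
                    String.valueOf(accountId),
                    "{\"accountId\":" + accountId + ",\"email\":\"" + email + "\"}"));
            return accountId;                          // welcome email, analytics, etc. react asynchronously
        }
    }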


gRPC and Envoy are exactly the things we are exploring now, although copy/paste would never fly in our org.


I know copy paste is looked down upon, but it's even suggested as a good practice (a little bit) in golang. A little copying is better than a little dependency.

https://go-proverbs.github.io/


Consider larger services than a typical microservice.

For the same reason monoliths tend to split when the organization grows, it is often more manageable to have a small number of services per team (ideally 1, or less).

It's ok if a service owns more than one type of entity.

It's less good if a service owns more than one part of your business's domain, however.


> Consider larger services than a typical microservice.

People seem to forget that there’s a continuum between monolith and microservices, it’s not one or the other.

Multiple monoliths, “medium”-services, monolith plus microservices, and so on are perfectly workable options that can help transition to microservices (if you ever need to get there at all).


That's fine, but reorganizations happen, teams can grow, and there is an advantage to having things be separate services in cases like this.

Definitely don't just stuff unrelated stuff into a service since a team that normally deals with that service is now working on unrelated stuff. If the unrelated stuff takes off, you now have two teams untangling your monolithic service.

That said, I'm a big fan of medium sized services, the kind of thing that might handle 10 or 20 different entities.


I'm going to go out on a limb here and suggest that parsing (and serializing) JSON is unlikely to be the actual problem, performance-wise. (Although "OpenAPI/Swagger" doesn't fill me with enthusiasm.)

More likely, I suspect, is that either you are shipping way too much data around, you have too much synchrony, or some other problem is being hidden in the distribution. (I once dealt with an ESB service that took 2.5 seconds to convert an auth token from one format to another. I parallelized the requests, and the time to load a page went from 10 sec to <3; then I yanked the service's code into our app and that dropped to milliseconds.)

Performance problems in large distributed systems are a pain to diagnose and the tools are horrible.


I've been using NewRelic and it's been wondrous around illuminating performance problems.


The whole point of doing microservices is so that you can split up processing responsibility boundaries reasonably, and each team is responsible for being an "expert" in the service it's responsible for.

This also means that each service should have no other services as dependencies, and if they do, you have too many separate services and you should probably look into why they aren't wrapped up together.

Using a stream from a different service is one thing: You should have clearly defined interfaces for inter-service communication. But if updating a service means you also need to fix an upstream service, you're doing it wrong and are actually causing more work than just using a monolith.

EDIT: and because you have clearly defined interfaces, these issues with updating one service and affecting another service literally cannot exist if you've done the rest correctly.


Perhaps a few ideas:

- Performance: use gRPC/protobuf instead of HTTP/OpenAPI, really not much of a reason to use HTTP/OpenAPI for internal endpoints these days

- Repo Management: No one is stopping you from using a monorepo but yourselves :)


Even just defining what “internal communication” means is difficult. We definitely suffer from the what-if syndrome- “what if some day we want to expose this service to a client?”

Our product is a collection of large systems used by many customers with very different requirements - and so we often fall into this configurability trap: “make everything super configurable so that we don’t have to rebuild, and let integration teams customize it”


Ah. In that case, you can expose your gRPC endpoints as traditional JSON/HTTP ones with gRPC-Gateway, which supports generating OpenAPI documentation too! Best of both worlds.


Sounds like you're suffering from not using YAGNI enough: you aren't gonna need it. Build what you need now. When that changes, you can change what you built. That was the original intention of Agile methodologies and BDD or TDD. When the tests pass, you're done.


You must be doing something wrong. In a company I work for, we clearly have separated microservices by bounded context, thus making them completely decoupled.


This is the key - decoupled. When you have to rollback commits across multiple service, they are not decoupled, and you're doing something wrong.

Each service should be fully independent, able to be be deployed & rolled-back w/o other services changing.

If you're making API changes, then you have to start talking about API versioning and supporting multiple versions of an API while clients migrate, etc.


>then you have to start talking about API versioning and supporting multiple versions of an API while clients migrate, etc.

Which adds some more complexity that just does not exist in a monolithic architecture


Sure it does. Once you have a large monolith, the same coordination problems hit. Just ask the Linux kernel developers: https://lwn.net/Articles/769365/


> This sounds good in theory, but parsing JSON and then serializing it N times has a real performance cost.

It's not just the serialization cost but latency (https://gist.github.com/jboner/2841832) as well, every step of the process adds latency, from accessing the object graph, serializing it, sending it to another process and/or over the network, then building up the object graph again.

The fashion in .net apps used to be to separate service layers from web front ends and slap an SOA (the previous name for micro-services) label on it. I experimented with moving the service in-process and got an instant 3x wall clock improvement on every single page load; we were pissing away 2/3rds of our performance and getting nothing of value from it. And this was in the best case scenario, a reasonably optimized app with binary serialization and only a single boundary crossing per user web request.

Other worse apps I've worked on since had the same anti-pattern but would cross the service boundary dozens/hundreds/thousands of times and very simple pages would take several seconds to load. It's enterprise scale n+1.

If you want to share code like this then make a dll and install it on every machine necessary, you've got to define a strict API either way.


Don't forget:

- Logging. All messages pertaining to a request (or task) should have a unique ID across the entire fleet of services in order to follow the trail while debugging.


"correlation id" is probably the best thing to poke into Google for guidance on it.


I would recommend investigating APM, opentracing and Uber’s jaeger project.


That's really a list of how not to develop microservices, or even software...

Thought must obviously be given to protocols. JSON is an obviously bad choice for this use case...

The point of microservices is loose coupling, including in the code. Having a code hierarchy negates this and arguably is bad practice in general.


I don't think it's necessarily the best idea to immediately have almost as many services as you have engineers. There are usually more gradual and logical ways to split things up.


> We also have our microservices in a hierarchy, all depending on a common parent microservice.

Can you explain this a bit more? I thought the point was to have each service be as atomic as possible, so that a change to one service does not significantly impact other services in terms of rollbacks/etc.

If I'm wrong here let me know, our company is still early days of figuring out how to get out of the problems presented by monolith (or in our case, mega-monolith).


These are excellent points. I was just having a conversation yesterday about system design and how there seems to be a tipping point after which the transactional/organizational costs of segmenting services outweigh the benefits.

My unscientific impression is that some of the organizational costs - just keeping the teams coordinated and on the same page - can become even more "expensive" than the technical costs.


Protocol buffers exist to solve two problems you list, parsing overhead and backwards compatibility.


I always wondered why ASN.1 never really seemed to take off. Tooling maybe?


Yes - both the protocol and tooling are expensive to support, even by 90s standards. Unless you have to use it for compatibility with a service you can’t fix it’d be much better to start with a better implementation of the concept.


ASN.1 has a number of problems with equivalent parsings (very bad in a security context, has been the source of a number of TLS vulnerabilities), as well as the fact that even discovering if two ASN.1 parsers will give the same result is undecidable.


I agree with all those drawbacks, but auth is something that can be handled with a bit of one-time engineering effort. Where I am, all traffic to microservices comes through an API gateway which is responsible for routing traffic to the correct service, and more importantly authorising access to those endpoints. Once the gateway has completed auth it places a signed JWT in the Authorisation header, at which point the microservice's responsibility goes from handling the entire auth process to checking that the signature can be verified.
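The verification step itself is tiny; here's a minimal sketch using the auth0 java-jwt library (the issuer name and the HMAC shared secret are assumptions - a gateway signing with RSA would verify against its public key via Algorithm.RSA256 instead):

    import com.auth0.jwt.JWT;
    import com.auth0.jwt.JWTVerifier;
    import com.auth0.jwt.algorithms.Algorithm;
    import com.auth0.jwt.interfaces.DecodedJWT;

    // Each microservice only checks the gateway's signature and reads the claims it needs.
    class GatewayTokenChecker {
        private final JWTVerifier verifier;

        GatewayTokenChecker(String sharedSecret) {
            this.verifier = JWT.require(Algorithm.HMAC256(sharedSecret))
                    .withIssuer("api-gateway")   // hypothetical issuer name
                    .build();
        }

        String verifiedSubject(String bearerToken) {
            DecodedJWT jwt = verifier.verify(bearerToken); // throws JWTVerificationException if invalid
            return jwt.getSubject();                       // e.g. the user id the gateway put in the token
        }
    }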


> Management of 10-ish repositories and build jobs.

Does each micro service have to live in its own repository? Especially with a common library everyone uses?


I think having a common library that does anything specific to your product is something of an anti-pattern in micro services.

It's not really a micro service - it's a distributed monolith


There are always going to be cross-cutting concerns. You don't want to have every microservice implement its own authentication, auditing, etc. from the ground up.


I really hate the term 'microservice', because it carries the implication that each service should be really small. In reality, I think the best approach is to choose good boundaries for your services, regardless of the size.

People forget the original 'microservice': the database. No one thinks about it as adding the complexity of other 'services' because the boundaries of the service are so well defined and functional.


I really like this example. A lot of databases have very good module separation internally. However, you don't often see people splitting out query planning, storage, caching, and etc into separately hosted services forced to communicate over the network; even in modern distributed databases.


Meanwhile, you also don’t see a lot of people claiming you should have one single repository that stores the source code, configs, CI tooling, deployment tooling, etc., for Postgres and Mathematica and the Linux kernel and the Unity engine, or that operating any one of these kinds of systems should have anything to do with running any other system apart from declared interfaces through which they might choose to optionally communicate or rely on each other as black box resources.


Funny, but we saw a debate around monolithic codebases and the monolithic image in Smalltalk.

A team of three engineers orchestrating 25 microservices sounds insane to me. A team of thirty turning one monolith into 10 microservices and splitting into 10 teams of three, each responsible for maintaining one service, is the scenario you want for microservices.

A team size of 10 should be able to move fast and do amazing things. This has been the common wisdom for decades. Get larger, then you spend too much time communicating. There's a reason why Conway's Law exists.

https://en.wikipedia.org/wiki/Conway%27s_law


I don't think Martin Fowler realized when he wrote the first microservices article that he'd stumbled upon a technical solution to a political problem. He just saw it work and wanted to share.


I don't think Martin Fowler realized when he wrote the first microservices article that he'd stumbled upon a technical solution to a political problem. He just saw it work and wanted to share.

The generation of programmers that Martin Fowler is from are exactly the people from whom I got my ideas around how organization politics affect software and vice versa. There was plenty of cynicism around organization politics back then.


And it's not as if Martin Fowler came up with the idea as an original either; QNX, Erlang and many other systems used those basic ideas much earlier (sometimes decades earlier). But this is the web, where the old is new again.


It’s safe to say that Fowler rarely claims to originate things. He’s more of a taxonomist.


He says so himself:

> We do not claim that the microservice style is novel or innovative, its roots go back at least to the design principles of Unix.


It's interesting how few executives understand this come reorganization time.


The architecture of software comes to resemble the organization writing the software. Use that fact or it will use you.


Indeed! Conway's the man. Unless your "service" corresponds to an actual existing team who has the time and authority to focus on it, you are asking for trouble and/or wasteful busywork. I curse misapplied microservices.

By the way, for a relatively small service to be shared by multiple applications, try RDBMS stored procedures first.


Agreed. And there's no reason why your monolith need become 10 microservices. It could be split into say just 3 if that makes more sense.


We have a running joke that we run macroservices, which is really just a 4 way split of our monolith once the team grew large enough.


That's a pretty good way of doing it, decouple what you have to but no more than that.


> Funny, but we saw a debate around monolithic codebases and the monolithic image in Smalltalk.

Was there a consensus resolution?


Was there a consensus resolution?

Smalltalk is awesome. Everyone else is doing it wrong, those dirty unwashed!

https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporativ...


Images are hard to version control.


Not when using a Smalltalk aware version control.


> Funny, but we saw a debate around monolithic codebases and the monolithic image in Smalltalk.

What reasons do you have for making that link? What are you refering to?

It's possible to load some code and snapshot as a Smalltalk image; then load some different code and snapshot as a different Smalltalk image.


What reasons do you have for making that link? What are you refering to?

It's possible to load some code and snapshot as a Smalltalk image; then load some different code and snapshot as a different Smalltalk image.

It's a different story when you're working on a team, and a different story when there are two or more teams using the same repository. Sure, you still have the image. The debate had to do with how the Smalltalk image affected the community's relationship to the rest of the world of software ecosystems, and how the image affected software architecture in the small. That "geography" tended to produce an insular Smalltalk community and tightly bound architecture within individual projects.


It's a different story when you have 3 or 4 code librarians responsible for a repository that's used by a dozen teams (ENVY/Developer).

> … relationship to the rest … insular Smalltalk community…

Perhaps not the image per se, so much as the ability to change anything and everything.

Every developer could play god; and they did.


Every developer could play god; and they did.

Turns out that not every god is as wise and as benevolent as every other god.


That's exactly the point!

There were awesome people who did awesome stuff; and there were others — unprepared to be ordinary.


Did anyone ever build a multi-image Smalltalk? For a lot of stuff it wouldn't make sense, but having the ability to separate the images could be useful.


Did anyone ever build a multi-image Smalltalk?

People at least played around with that as a research project. There's one that showed up at the Camp Smalltalks I went to, with a weird-but-sensible sounding name. (Weird enough I can't remember the name.)

There would have been great utility in such a thing. For one thing, the debugger in Smalltalk is just another Smalltalk application. So what happens when you want to debug the debugger? Make a copy of the debugger's code and modify the debugger hooks so that when debugging the debugger, it's the debugger-copy that's debugging the debugger. With multi-image Smalltalk, you could just have one Smalltalk image run the debugger-copy without doing a bunch of copy/renaming. (Which, I just remembered, you can make mistakes at, with hilarious results.)

If you do the hacky shortcut of implementing one Smalltalk inside another Smalltalk (Call this St2), then the subset of objects that are the St2 objects can act a bit like a separate image. In that case, the host Smalltalk debugger can debug the St2 debugger.


What do you mean by a multi-image Smalltalk?


A Smalltalk with multiple, separate images loaded at the same time.


I suspect that I still don't understand what you're really asking. Do you imagine those "multiple, separate images" would run in the same OS process?

Otherwise — [pdf] "Distributed Smalltalk"

http://www.cincomsmalltalk.com/main/documentation/VisualWork...

Otherwise (for source code control) — "Mastering ENVY/Developer"

https://books.google.com/books?id=ld6E19QIMo4C


When I bring up Smalltalk, I get an all-in-one environment from the image I load. It's live and any code I add goes into that image. Now I can use code control and build specific images, but it pretty much is a one-image-at-a-time world.

What I'm talking about is loading up multiple images into the same IDE and running them like fully separate images, with maybe some plumbing for communication and code loading between them. You can sorta pull that stunt by, as stcredzero mentioned, running Smalltalk in Smalltalk, but I want separate images.



Cool, but that looks like a remoting tool, not a lot of VMs on my desktop.


> …loading up multiple images into the same IDE…

At the same time? Why? What will that let you do?


It would let me run a network of VMs with different code that could model my whole solution at once, locally.


> locally

Meaning on a single machine. Not across networks.

> run a network of VMs with different code

What do you think prevents that being done with "fully separate images" (VMs in their own OS process) ?


The last time I played with a Smalltalk, all the code was one big image. There was no way to run multiple VMs.


> There was no way to run multiple VMs.

In this example on Ubuntu "visual" is the name of the VM file, and there are 2 different image files with different code in them "visualnc64.im" and "quad.im".

    $ /opt/src/vw8.3pul/bin/visual /opt/src/vw8.3pul/image/visualnc64.im &
    [1] 8689 

    $ /opt/src/vw8.3pul/bin/visual /opt/src/vw8.3pul/image/quad.im &
    [2] 8690
That's created 2 separate OS processes, each OS process is running an instance of the Smalltalk VM, and each Smalltalk VM opened a different Smalltalk image containing different code.

Do you see?


I’m not sure if we are talking past each other or you are ignoring the whole IDE thing. Yes, I can run multiple VMs on the same machine, but you are missing that I want to spin up these VMs in my Smalltalk IDE and not via some terminal launch script. I want my environment there for me to edit and debug code. I’m pretty sure you cannot do that in VisualWorks.


> I want my environment there for me to edit and debug code

Both of those instances of the Smalltalk VM, the one in OS process 8689 and the one in OS process 8690, are headfull — they both include the full Smalltalk IDE, they are both fully capable of editing and debugging code.

(There's a very visible difference between the 2 Smalltalk IDEs that opened on my desktop: visualnc64.im is as-supplied by the vendor; quad.im has an additional binding for the X FreeType interface library, so the text looks quite different).

(iirc Back-in-the-day when I had opened multiple Smalltalk images I'd set the UI Look&Feel of one to MS Windows, of another to Mac, of another to Unix: so I could see which windows belonged to which image.)


Yeah, but they are 2 IDEs not a single IDE. You are running two copies not one copy with two instances. I then need to jump between programs to edit code.


So when I asked "What will that let you do?", the only "benefit" you-can-think-of is the possibility of switching from editing code in visualnc64.im to editing code in quad.im without a mouse-click ?


So when I asked "What will that let you do?", the only "benefit" you-can-think-of is the possibility of switching from editing code in visualnc64.im to editing code in quad.im without a mouse-click?

No, that would not be enough to make anything work. What I can think of is an IDE that had access to all the VMs running and some plumbing for the VMs to communicate. I would love to be able to spin-up Smalltalk VMs so I can simulate a full system on my desk. Having separate IDEs running means I don't have any integration so I have to debug in multiple different IDEs when tracing communications. I can imagine some of the debugging and code inspection that could be extended to look at code running simultaneously in multiple VMs.


Already mentioned up thread — Distributed Smalltalk.

"Open a debugger where you can trace the full stack on all involved machines."

"Inspect objects in the debugger or open inspectors on any of the objects, regardless of the system they are running on."

April 1995 Hewlett Packard Journal, Figure 7 page 90

https://www.hpl.hp.com/hpjournal/95apr/apr95a11.pdf


I want it all, not just debugging. Distributed Smalltalk didn't do it all in one IDE.


Microservices are interesting.

Not technically, as they increase complexity.

But they enable something really powerful: continuity of means, continuity of responsibility - that way a small team has full ownership of developing AND operating a piece of a solution.

Basically, organizations tend to be quite efficient when dealing with small teams (about a dozen people, the pizza rule and everything); that way information flows easily, with point-to-point communication without the need for a coordinator.

However, with such architecture, greater emphasis should be put on interfaces (aka APIs). A detailed contract must be written (or even set as a policy):

* how long will the API remain stable?

* how will it be deprecated? with a Vn and Vn-1 scheme?

* how is it documented?

* what are the limitations? (performance, call rates, etc)?

If you don't believe me, just read "Military-Standard-498". We can say anything about military standards, but military organizations, having specified, ordered and operated complex systems for decades, know a thing or two about managing complex systems. And interfaces have a good place in their documentation corpus with the IRS (Interface Requirements Specification) and IDD (Interface Design Description) documents. Keep in mind this MIL-STD is from 1994.


According to Wikipedia, Military Standard 498 has been replaced with ISO/IEC/IEEE 12207. Do you have any experience with that? Do you have experience with any other modern standards for software development?


Not really, it's something I was confronted with when I was working on military contracts a few years ago.

From what I recall, it's very waterfall-minded in terms of specification workflow, it's also quite document heavy, and the terminology and acronyms can take a while to get used to.

I found it was a bit lacking regarding how to put together all the pieces into a big system, aka the Integration step. IMHO it's a bit too software oriented, lacking on the system side of things (http://www.abelia.com/498pdf/498GBOT.PDF page 60).


Thanks for the source.


10 teams of 3 each owning their own little slice of the pie sounds like an organizational nightmare; mostly, you can't keep each team fully occupied with just that one service, that's not how it works. And any task that touches more than one microservice will involve a lot of overhead with teams coordinating.

While I do feel like one team should hold ownership of a service, they should also be working on others and be open to contributions - like the open source model.

Finally, going from a monolith to 10 services sounds like a bad idea. I'd get some metrics first, see what component of the monolith would benefit the most (in the overall application performance) from being extracted and (for example) rewritten in a more specialized language.

If you can't prove with numbers that you need to migrate to a microservices architecture (or: split up your application), then don't do it. If it's not about performance, you've got an organizational problem, and trying to solve it with a technical solution is not fixing the problem, only adding more.

IMO, etc.


"10 teams of 3 each owning their own little slice of the pie sounds like an organizational nightmare; mostly, you can't keep each team fully occupied with just that one service, that's not how it works. And any task that touches more than one microservice will involve a lot of overhead with teams coordinating."

I guess that's where the critical challenge lies. You'd better be damn sure you know your business domain better than the business itself! So you can lay down the right boundaries, contracts & responsibilities for your services.

Once your service boundaries are laid down, they're very hard to change.

It takes just one cross-cutting requirement change to tank your architecture and turn it into a distributed ball of mud!


Which has to stand as a damning indictment of the one-service-per-team model, surely?

Something so inflexible can't survive contact with reality (for very long).

At work we run 20-something microservices with a team of 14 engineers, and there's no siloing. If we need to add a feature that touches three services then the devs just touch the three services and orchestrate the deployments correctly. Devs wander between services depending on the needs of the project/product, not based on an arbitrary division.


Well, THERE'S your problem!

If you are doing http/json between microservices then you are definitely holding it wrong.

Do yourself a favor and use protobuf/grpc. It exists specifically for this purpose, specifically because what you're doing is bad for your own health.

Or Avro, or Thrift, or whatever. Same thing. Since Google took forever to open source grpc, every time their engineers left to modernize some other tech company, Facebook or Twitter or whatever, they'd reimplement proto/stubby at their new gig. Because it's literally the only way to solve this problem.

So use whatever incarnation you like.. you have options. But json/http isn't one of them. The problem goes way deeper than serialization efficiency.

(edit: d'oh! Replied to the wrong comment. Aw well, the advice is still sound.)


It might depend a bit on how you scope it, too.

I once worked at a company where a team of 3 produced way more than 25 microservices. But the trick was, they were all running off the same binary, just with slightly different configurations. Doing it that way gave the ops team the ability to isolate different business processes that relied on that functionality, in order to limit the scale of outages. Canary releases, too.

It's 3 developers in charge of 25 different services all talking to each other over REST that sounds awful to me. What's that even getting you? Maybe if you're the kind of person who thinks that double-checking HTTP status codes and validating JSON is actually fun...


I've done that a couple of times. It's a good pattern!

I worked on an e-commerce site a decade ago where the process types were:

1. Customer-facing web app

2. CMS for merchandising staff

3. Scheduled jobs worker

4. Feed handler for inventory updates

5. Distributed lock manager

6. Distributed cache manager

We had two binary artifacts - one for the CMS, one for everything else - and they were all built from a single codebase. The CMS was different because we compiled in masses of third-party framework code for the CMS.

Each process type ran with different config which enabled and configured the relevant subsystems as needed. I'm not sure to what extent we even really needed to do that: the scheduled jobs and inventory feed workers could safely have run the customer app as well, as long as the front-end proxies never routed traffic to them.
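In code it amounts to little more than a role switch at startup (a minimal sketch; the role names and classes are made up to mirror the process types above):

    // One artifact, and an environment variable decides which subsystems start.
    public final class Main {
        public static void main(String[] args) {
            String role = System.getenv().getOrDefault("PROCESS_ROLE", "web");
            switch (role) {
                case "web":       new CustomerWebApp().start();     break; // hypothetical classes
                case "cms":       new MerchandisingCms().start();   break;
                case "jobs":      new ScheduledJobWorker().run();   break;
                case "inventory": new InventoryFeedHandler().run(); break;
                default: throw new IllegalArgumentException("unknown PROCESS_ROLE: " + role);
            }
        }
    }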


Looks like a service oriented architecture to me


It really depends upon what those 25 different services are. If they are trivial to separate, sure. Like an image resizing microservice, an email sending microservice, and so on. I mean go wild, these are trivial. Coincidentally, when people like to talk about how easy microservices are, they love to talk about these trivial examples.

What isn't trivial is when someone decides to make an Order microservice and an Account microservice when there's a business rule where both accounts and orders can only co-exist. Good fucking luck with 3 developers, I'm pretty sure with a team of 3 in charge of 23 other microservices you aren't exhaustively testing inter-microservice race conditions.


Or better yet: a massive front end SPA talking to an array of stupid microservices that are not much more than tables with web accessible endpoints for CRUD and probably an authentication service thrown in for good measure. All of the business logic on the JS side of things. The worst of both worlds, now you have a monolith and a microservices based architecture with the advantages of neither.


I work on an application like that. Not 3:25 but not that far off either. I'm quite content with the situation.

The apps all handle a bespoke data connection, converting it into a standard model which they submit to our message broker. From there on, our services are much larger and fewer in number. It's very write-once-run-forever; some of these have not been touched since their inception years ago, resulting in decreased complexity and maintenance cost.

The trick is not having REST calls all over your services. You're just building a distributed monolith at that point.


At the last startup I've been part of, we've been having a bit of fun building our server architecture in Swift, and so far the One-Binary-Many-Services model has been working out pretty well. You can run it all on a single machine, you can have debug hooks that make it seem like a monolith, or scale it out if need be. When it comes down to it, the Authentication Service really doesn't need to know about the Image Upload Service, and splitting it is all about defining good interfaces. You just need to put some effort in to keep your development environment sane.


I'm kind of dealing with your awful scenario right now. It is pretty bad. What happened was the department used to be a lot, lot larger, and people tended to only have to deal with 7 or 8 of them at a time (it was still excessive; I often had difficulty debugging and keeping things straight for tasks), but after two years of layoffs, other employees quitting, and executives pulling our employees into other departments, we're a tiny shell of what we used to be, and we still have to manage all of those microservices, and it's so difficult.

I've been daydreaming about monoliths and will be asking at interviews for my next job hoping to find more simplified systems. I came from the game industry originally, where you only have one project for the game and one more for the webservice if it had one, and maybe a few others for tools that help support the game.


We recently gave a name to that One-Binary-Many-Services approach - Roles.

https://github.com/7mind/slides/raw/master/02-roles/target/r...


This wasn't actually that. All of them did the same job, just that one did it for widgets for the widget-handling team, and another did it for whatsits for the whatsit-handling team, and another did it for both widgets and whatsits for the reporting system, etc. etc.


Why does it happen? Consultants, buzzwords, $$


I’ve worked at 2 companies with monoliths that had great products and tremendous business success.

And 3 companies with micro service infrastructures that had lousy products and little business success.

Can’t totally blame microservices but I recall a distinctly slower and more complicated dev cycle.

These were mostly newer companies where micro services make even less sense and improving product and gaining users is king.


The definition of “micro” appears to be hugely variable! If you’d asked me I’d say that sure, my last team definitely built microservices. A team of around 10 engineers built and maintained something like 3 services for a product launch, each with a very different purpose, and over time we added a couple more. Three people maintaining 25 services sounds absolutely bonkers to me.


If your monolith becomes unwieldy, you have a problem with your code structure which microservices won't solve. As we all know, you need well-isolated, modular code with well-defined boundaries. You can achieve this just as well in a monolith (and you can also achieve total spaghetti code between microservices).

Microservices is a deployment choice. It's the choice to talk between the isolated parts with RPC's instead of local function calls.

So are there no reasons to have multiple services? No, there are reasons, but since it's about deployments, the reasons are related to deployment factors: e.g. if you have a subsystem that needs to run in a different environment, or a subsystem that has different performance/scalability requirements, etc.


That's just "services", though, and it's been the way that people have been building software for a very long time. I can attest to have done this in 2007 at a large website, which was at least 7 years before the "microservices" hype picked up (https://trends.google.com/trends/explore?date=all&q=microser...). When people say "microservices" they're referring to the model of many more services than what you describe, and the associated infrastructure to manage them.


I also think designing in the microservices mindset (i.e. loose coupling, separable, dependency free architecture) is something which can be done on a continuum, and there's not a strict dichotomy between The Monolith and Microservices(tm).

Even if you're working on an early prototype which fits into a handful of source files, it can be useful to organize your application in terms of parallel, independent pieces long before it becomes necessary to enforce that separation on an infrastructure/dev-ops level.


You start with microservices when you realize that including the Elasticsearch API in your jar causes dependency conflicts that are not easy to resolve.


While there are cases where I think microservices make it easier to scale an application across multiple hosts, I don't understand the organizational benefits compared to just using modules/packages within a monolith. IMO a team that makes an organizational mess of a monolith and makes it grow unwieldy will likely repeat that mistake with a microservice oriented design.


And then you pray to whatever God you believe in that you happened to get those 10 abstractions just right!


Even then, that is what libraries are for.


I agree with the author to some extent.

The main thing, however, is that many people think that, by breaking up their monolith into services, they now have microservices. No, you don't. You have a distributed monolith.

Can you deploy services independently? No? Then you don't have microservices. Can you change one microservice's data storage and deploy it just fine? If you are changing a table schema and you now have to deploy multiple services, they are not microservices.

So, you take a monolith, break it up, add a message broker, centralized logging, maybe deploy them on K8s, and then you achieve... nothing at all. At least, nothing that will help the business. Just more complexity and a lot more stuff that needs to be managed and can go wrong.

And probably a much bigger footprint. Every stupid hello world app now wants 8GB of memory and its own DB for itself. So you added costs too. And accomplished nothing a CI/CD pipeline plus sane development and deployment practices wouldn't have achieved.

It is also sometimes used in lieu of team collaboration. Now everyone can code their own thing in their own language without talking to anyone else. Except collaboration is still needed, so you are accruing tech debt that you know nothing about. You can break interfaces and assumptions, where your monolith wouldn't even compile. And now no-one understands how the system works anymore.

Now, if you are designing a system using microservices properly, then it can be a dream to work on, and manage in production. But that requires good teamwork on each team and good collaboration between teams. You also need a different mindset.


Do you have a recommended way of handling transaction boundaries that span multiple services? Everyone likes to outline how the happy path works, but in the real world it comes down to this: now you have an eventually consistent distributed system, and there is no generally valid way to unroll a change across multiple services if one of the calls fails.


If several databases/services are so tied together that they need to be updated/rolled back at the same time, then they belong to the same service/database.

There will be times when you trade data consistency for performance/scalability, e.g. in a scenario where you are breaking user actions away from the main user service: if a user is deleted but deleting from user actions failed, you don't roll back the user delete. Either just let the invalid data sit in the user actions database, or do separate clean-ups periodically.
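
A minimal sketch of that second option (periodic clean-up) in Go, assuming hypothetical store interfaces over the two services' data; the point is just that reconciliation happens out of band instead of inside a cross-service transaction:

    package cleanup

    import (
        "context"
        "log"
        "time"
    )

    // Hypothetical interfaces over the two services' data stores.
    type UserStore interface {
        Exists(ctx context.Context, userID string) (bool, error)
    }

    type ActionStore interface {
        UserIDs(ctx context.Context) ([]string, error)
        DeleteForUser(ctx context.Context, userID string) error
    }

    // reapOrphanedActions deletes actions whose owning user no longer exists.
    // It is safe to run repeatedly; missing a run only delays the clean-up.
    func reapOrphanedActions(ctx context.Context, users UserStore, actions ActionStore) {
        ids, err := actions.UserIDs(ctx)
        if err != nil {
            log.Printf("reap: list user ids: %v", err)
            return
        }
        for _, id := range ids {
            ok, err := users.Exists(ctx, id)
            if err != nil || ok {
                continue // skip on error; try again next run
            }
            if err := actions.DeleteForUser(ctx, id); err != nil {
                log.Printf("reap: delete actions for %s: %v", id, err)
            }
        }
    }

    // RunReaper runs the clean-up on a fixed interval.
    func RunReaper(users UserStore, actions ActionStore) {
        for range time.Tick(time.Hour) {
            reapOrphanedActions(context.Background(), users, actions)
        }
    }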



"In order to be reliable, a service must atomically update its database and publish a message/event." so in other words to some transaction log so we just reinvented a distributed transactional database system with crapload of custom programming overhead. Whatever solution is described they all pretty much boil down to an equivalent of distributed eventually consistent database or distributed acid database but with a large number of database functionality pushed into the application.


So, a distributed ACID DB... No one so far (from what I have seen) has come up with a better solution than the guys 50 years ago.


Well, we’ve gotten this far...


You can break up the monolith, use a message broker, or even let the services communicate via HTTP, but you do not need K8s. It's pointless unless you want to orchestrate multiple VMs/images and your infra scales to more than 10-15 servers/containers, etc.


When I started programming professionally it was the era of "Object Oriented Design" will save us all. I worked on an e-commerce site that had a class hierarchy 18 levels deep just to render a product on a page. No one knew what all those levels were for, but it sure was complicated and slow as hell. The current obsession with microservices feels the same in many ways.

There appear to be exactly two reasons to use microservices:

1. Your company needs APIs to define responsibility over specific functionality. Usually happens when teams get big.

2. You have a set of functions that need specific hardware to scale. GPUs, huge memory, high performance local disk, etc. It might not make sense to scale as a monolith then.

One thing you sure don't get is performance. You're going to take an in-process shared-memory function call and turn it into a serialized network call and it'll be _faster_? That's crazy talk.

So why are we doing it?

1. Because we follow the lead of large tech companies, since they have great engineers, but unfortunately they have very different problems than we do.

2. The average number of years of experience in the industry is pretty low. I've seen two of these kinds of cycles now and we just keep making the same mistakes over and over.

Anyway, I'm not sure who I'm writing this comment for, I guess myself! And please don't take this as criticism, I've made these exact mistakes before too. I just wish we as an industry had a deeper understanding of what's been done before and why it didn't work.


A great many of our problems in tech are the result of "...we follow the lead of large tech companies...but unfortunately they have very different problems than we do."

Imagine if we built single-family houses based on what made sense for skyscrapers. Or if we built subcompact cars based on a shrunk-down version of semi-tractor trailers. They would not be efficient, or even effective.

But, if your aspiration is to get a job at a skyscraper-builder, then it MIGHT be what makes sense to do. "Have you used appropriate-only-for-largest-websites technology X?" "Why yes I have." The same incentives probably apply to the tech management, btw. We have an incentives problem.


It's not about the age of the engineers but their maturity. Some just don't care about the quality of their work because they get paid either way. Look at Silicon Valley: "ageism" is real there. They need young devs with ideas and huge skill to bring them to life, to stay ahead of the competition. Most companies don't understand that and blindly try to copy it, often because their management is not competent enough.

There are plenty of reasons for the current situation. The world and people are complicated.


> They need young devs with ideas and huge skill to bring them to life

More likely young devs with naivety and thus motivation.


> young devs with ideas and huge skill

LOL :D You mean cheap devs who will work for half the price and will never raise their concerns and will gladly accept the stupidest and most menial tasks.


> One thing you sure don't get is performance.

You can optimise per use case. In the monolith everything has to work for every use case. In a given service you might not need to handle writes at all, or you might not care if your writes are super async. This means you can start to take liberties with the back-end (e.g. denormalising where necessary) and you have room to breathe.
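
A small sketch of the kind of liberty that buys you, assuming a service where individual writes are allowed to be fire-and-forget (which would obviously be wrong for something like payments):

    package asyncwrite

    import "log"

    type Event struct {
        Key, Value string
    }

    // Writer accepts writes on a buffered channel and persists them in the
    // background. Callers return immediately; durability is traded for latency,
    // which may be fine for this service but not for every use case.
    type Writer struct {
        ch chan Event
    }

    func NewWriter(persist func(Event) error) *Writer {
        w := &Writer{ch: make(chan Event, 1024)}
        go func() {
            for ev := range w.ch {
                if err := persist(ev); err != nil {
                    log.Printf("async write failed, dropping %q: %v", ev.Key, err)
                }
            }
        }()
        return w
    }

    // Write does not block the request path unless the buffer is full.
    func (w *Writer) Write(ev Event) {
        w.ch <- ev
    }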


I'd like to point out that microservices are not always as cheap as you may think. In the AWS/Lambda case, what will probably bite you is the API Gateway costs. Sure, they give you 1,000,000 calls for free, but it's $3.50 per million after that. That can get very expensive, very quickly. See this Hacker News post from a couple of years ago. The author's complaint is still valid: "The API gateway seems quite expensive to me. I guess it has its use cases and mine doesn't fit into it. I run a free API www.macvendors.com that handles around 225 million requests per month. It's super simple and has no authentication or anything, but I'm also able to run it on a $20/m VPS. Looks like API gateway would be $750+data. Bummer because the ecosystem around it looks great. You certainly pay for it though!"

https://news.ycombinator.com/item?id=13418332


Worth saying: Now that ALBs support Lambda as a backend, reaching for APIG w/ a lambda proxy makes less sense, unless you're actually using a lot of the value-adds (like request validation/parsing and authn). Most setups of APIG+Lambda I've seen don't do this, and prefer to just Proxy it; use an ALB instead.

ALB pricing is a little strange thanks to the $5.76/mo/LCU cost and the differentiation between new connections and active connections. The days are LONG GONE when AWS just charged you for "how much you use", and many of their new products (Dynamo, Aurora Serverless, ALB) are moving toward a crazy "compute unit" architecture five abstraction layers behind units that make sense.

But it should be cheaper; back of the napkin math, 225M req/month is about 100RPS averaged, which can be met with maybe 5 LCUs on an ALB. So total cost would be somewhere in the ballpark of $60/month, plus the cost of lambda which would probably be around $100/month.

Is it cheaper than a VPS? Hell no. Serverless never is. But is it worth it? Depends on your business.


Right, there are a few use cases for Lambda that make lots of sense, and then some that don't. If you're not extracting any operational benefits or costs (think a request that needs to run 5m per hour) from the managed portion of Lambda then it's probably not for you.

The ALB point is very strong. APIGW can add lots of value with request/response manipulation and by taking away the headaches of managing your own VPS, but you need to make sure that you don't just need a bare-bones path -> lambda mapping, which is where the ALB can shine.


The $750/month is well worth it to organizations with billions in revenue wishing to protect user data. Better to route all traffic through an API gateway than to expose all of your microservices on the public internet.

Everyone has to communicate through the API gateway. Then, you get a single point where things are easily auditable.

It has a lot of benefits that apply to business use cases. Your free API may not have as strict requirements.


It sounds like you’re making an excuse why you’re overpaying for API Gateway yourself.

You can easily deploy Kong/Tyk these days for peanuts and have your own single point of entry, without AWS API Gateway’s insane pricing.


I've never used API Gateway outside of quick prototype tests to access Lambda. $750 per month doesn't sound like a lot of money if you have 225 million requests per month. A free API is probably an exception, but I do realize why that would be too expensive for your use case.


225e6 requests/month is about 80 requests/second. That's a very low rate for a gateway.


You’re assuming an even distribution of requests, but that’s not usually the case. It could be several multiples of that at peak.


Probably cheaper to invoke the lambda functions from Cloudflare workers.


Recent encounter: 70+ microservices for a minor ecommerce application. Total madness and while I'm all for approaching things in a modular way if you really want to mimic Erlang/BEAM/OTP just switch platforms rather than to re-invent the wheel. In Erlang it would make perfect sense to have a small component be a service all by itself with a supervision tree to ensure that all components are up and running.


Erlang/OTP, and Elixir on top of that, bring more than 30 years of experience of building Reliable, Scalable and Highly Available Systems.

It's difficult to fight this.

Somebody new to Erlang can get a feel of what Systems Architecture in Erlang really means from a great article by Fred Hebert:

"The Hitchhiker's Guide to the Unexpected" https://ferd.ca/the-hitchhiker-s-guide-to-the-unexpected.htm...


I'm always curious why the Actor concept isn't more widely used. Many platforms / languages have some form of it.


I have no idea either, though the simplicity may be off-putting, and it might make people feel stupid for having made things so overly complex.

It seems like a near perfect fit for the web.


This is why WhatsApp managed a $19B exit with only 50 engineers, while so few others have managed it.


Apps like WhatsApp/Insta have relatively small feature-sets. I doubt you can pin their success on their tech-stack. It's more like they aggressively avoided feature creep.


I'm equally curious about flow-based programming. Treating things as filters has worked well for the Unix command line. There's no reason we can't have processes do sort, filter, merge, lookup, and such in well-defined ways and chain them together as needed for many kinds of applications. Although "network" and "port" mean something particular within FBP it doesn't at all mean the processes can't be on different machines and talking via TCP/IP.
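
In-process, the same idea is easy to sketch with channels; each stage only knows its input and output, and the stages could just as well be separate processes connected over TCP (a toy example, not tied to any particular FBP framework):

    package main

    import (
        "fmt"
        "strings"
    )

    // Each stage reads from an input channel and writes to an output channel,
    // knowing nothing about what sits upstream or downstream of it.
    func filter(in <-chan string, keep func(string) bool) <-chan string {
        out := make(chan string)
        go func() {
            defer close(out)
            for s := range in {
                if keep(s) {
                    out <- s
                }
            }
        }()
        return out
    }

    func transform(in <-chan string, f func(string) string) <-chan string {
        out := make(chan string)
        go func() {
            defer close(out)
            for s := range in {
                out <- f(s)
            }
        }()
        return out
    }

    func main() {
        src := make(chan string)
        go func() {
            defer close(src)
            for _, s := range []string{"alpha", "beta", "gamma", "b-side"} {
                src <- s
            }
        }()

        // Chain stages the way you'd chain Unix filters with pipes.
        out := transform(filter(src, func(s string) bool { return strings.HasPrefix(s, "b") }), strings.ToUpper)
        for s := range out {
            fmt.Println(s) // prints BETA, then B-SIDE
        }
    }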


There is an interesting equivalence between pure functional programming, flow-based programming, and the actor model.

There is a lot of fancy theory to underpin this equivalence; the essence is that all of them revolve around (side-effect-free) transformation.


I don't know Erlang and I'm only a couple of months into learning Elixir in my (limited) free time. Granted, it's not purely functional, but... I've been reflecting on this exact equivalence, as you put it, and am happy to hear that my inclination about the Actor model isn't unfounded.


Ugh, I'm trying to create something like this.

The nodes are fine and easy, but the orchestration / flow implementation has had me stuck for > 1 month... :(


In Erlang it makes perfect sense to have a single counter variable be a gen_server by itself and be part of a supervision hierarchy. Deploying and versioning that counter separately will still correctly get laughed at.


I feel like this argument of Monolith vs Microservice is really a discussion about premature optimization. There is nothing wrong with starting out with a monolith with great design. Limiting the responsibility of the monolith is the key I believe to a maintainable piece of software. Should you need something or your business needs grow to be outside of that defined responsibility creating a new service should be discussed.

For example, I have a service that hosts/streams videos. I have one service that handles all the metadata of the videos (users, discussions, etc.); one could even think of this as a monolith. Then the video encoding piece started interfering with the metadata stuff, so I decided it might be smart to separate the video encoding into its own service, since it had different scaling requirements from the metadata server.

In that specific case it made a lot of sense to have 2 services I can justify it with the following reasons.

- Resource isolation is important to the performance of the application.

- having the ability to scale the encoder workers out horizontally makes sense.

So now it makes sense I’m managing 2 services.

There should be a lot of thought and reasoning behind doing engineering work. I think following trends is great for fashion products like jeans / shirts etc... but not for engineering.

If you are starting a project doing microservices, chances are you are optimizing prematurely. That's just my 2 cents.


> It feels like we’re starting to pass the peak of the hype cycle of microservices

I feel like any article I see on microservices bemoans how terrible/unnecessary they are. If anything, we're in the monolith phase of the hype cycle =)

If you're moving to microservices primarily because you want serving path performance and reliability, you're doing it wrong. The reasons for microservices are organizational politics (and, if you're an individual or small company, you shouldn't have much politics), ease of builds, ease of deployments, and CI/CD.


It's, like almost everything else, a matter of balance. A monolith that cross-connects all kinds of stuff will become an unmaintainable mess over time; a ridiculously large number of microservices will just move the mess to the communications layer and will not solve the problem.

A couple of independent, vertically integrated microservices (or simply: services, screw fashion) is all most companies and applications will ever need; the few that expand beyond that will likely need more architectural work before they can be safely deployed at scale.


Even at a very large company, it's probably too common to spin out some logic into its own microservice. Often times owning your own microservice is a great argument for a promo, even if the code could just as well be in an existing service. Companies would claim to discourage that tendency, but it's a false claim.

So that's an additional reason for microservice frameworks: a welfare safety net for software engineers.


There are also two purely engineering considerations: scalability and crash isolation.

Scalability -- for when your processes no longer fit on a single node, and you need to split into multiple services handling a subset of the load. This is rare, given that vendors will happily sell you a server with double-digit terabytes of ram.

Crash isolation -- for when you have some components with very complex failure recovery, where "just die and recover from a clean slate" is a good error handling policy. This approach can make debugging easy, and may make sense in a distributed system where you need to handle nodes going away at any time anyways, but it's not a decision to take lightly, especially since there will be constant pressure to "just handle that error, and don't exit in this case", which kills a lot of the simplicity that you gain.

Both are relatively rare.


Microservices are not good for scalability. You want data parallelism, like 20 webservers that all do the same thing. Not splitting your app into 20 microservices all doing something else.


Developers think of microservices this way, now. Managers still think of it as exciting.


Manager here. Don't think so! Do whatever is right for the team/company.


> Manager here.

I bet you still have a commit bit.


Yup. Does that mean I'm not the target audience for that comment? :D


You need to build a pretty big system for ease of builds, ease of deployments and CI/CD to become easier with microservices.


Yes. If you would balk at multiple SWE years worth of work to move to microservices because of all the opportunity costs for your business, microservices aren't for you.


100%


I've come to the conclusion that microservices work for large organizations where division of labor is important but for small development teams it actually makes things worse.

What once was just a function call now becomes an API call. And now you need to manage multiple CI/CD builds and scripts.

It adds a tremendous amount of overhead and there is less time spent delivering core value.

Serverless architectures and app platforms seems to correct a lot of this overhead and frustration while still providing most of the benefits of microservices.


I've never found the division of labor argument all that compelling. Using multiple services compels clean divisions and impenetrable abstractions, but shouldn't it be possible to achieve that within a single program? A language with strong support for information hiding should be able to enforce the restrictive interfaces that microservices compel, but without the complexity and overhead of going over the network.

If that's not possible, I'd take that as a sign that we need new languages and tools.


I'm on a small team working with microservices. I have different complaints than yours. The main issue I run into with microservices is that I lose the benefit of my Go compiler. I don't like working in dynamic languages because of all the runtime errors I run into. With microservices, even using a statically typed language becomes a nightmare of runtime errors.

If I change the type on a struct that I'm marshaling and unmarshaling between services, I can break my whole pipeline if I forget to update the type on each microservice. This feels like something that should be easy to catch with a compiler.


If your services need a shared, implicit understanding of types, you're not respecting the microservice boundaries. Each microservice needs to offer a contract describing what inputs it accepts and immediately reject a request that doesn't meet that contract. Then type mismatches becomes very obvious during development when you start getting 400s when testing against the DEV endpoint. Don't pass around opaque structs.
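
In Go terms, that usually means each service decodes into its own request type and rejects anything it doesn't recognise, rather than trusting a struct definition copied between repos. A minimal sketch (the CreateUserRequest shape is just an example):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    type CreateUserRequest struct {
        Email string `json:"email"`
        Name  string `json:"name"`
    }

    func createUser(w http.ResponseWriter, r *http.Request) {
        dec := json.NewDecoder(r.Body)
        dec.DisallowUnknownFields() // an extra or renamed field is an error, not silence

        var req CreateUserRequest
        if err := dec.Decode(&req); err != nil {
            http.Error(w, "bad request: "+err.Error(), http.StatusBadRequest)
            return
        }
        if req.Email == "" || req.Name == "" {
            http.Error(w, "bad request: email and name are required", http.StatusBadRequest)
            return
        }
        w.WriteHeader(http.StatusCreated)
    }

    func main() {
        http.HandleFunc("/users", createUser)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }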


If you use code gen like gRPC or thrift, it is easy for a compiler to catch. Even if you're using microservices with a shared DTO library, its easy for the compiler to catch.


A monorepo can help with this.


Check out these two articles from Shopify on their Rails monolith: https://engineering.shopify.com/blogs/engineering/deconstruc...

https://engineering.shopify.com/blogs/engineering/e-commerce...

Specifically relevant to the discussion is this passage:

> However, if an application reaches a certain scale or the team building it reaches a certain scale, it will eventually outgrow monolithic architecture. This occurred at Shopify in 2016 and was evident by the constantly increasing challenge of building and testing new features. Specifically, a couple of things served as tripwires for us.

> The application was extremely fragile with new code having unexpected repercussions. Making a seemingly innocuous change could trigger a cascade of unrelated test failures. For example, if the code that calculates our shipping rate called into the code that calculates tax rates, then making changes to how we calculate tax rates could affect the outcome of shipping rate calculations, but it might not be obvious why. This was a result of high coupling and a lack of boundaries, which also resulted in tests that were difficult to write, and very slow to run on CI.

> Developing in Shopify required a lot of context to make seemingly simple changes. When new Shopifolk onboarded and got to know the codebase, the amount of information they needed to take in before becoming effective was massive. For example, a new developer who joined the shipping team should only need to understand the implementation of the shipping business logic before they can start building. However, the reality was that they would also need to understand how orders are created, how we process payments, and much more since everything was so intertwined. That’s too much knowledge for an individual to have to hold in their head just to ship their first feature. Complex monolithic applications result in steep learning curves.

> All of the issues we experienced were a direct result of a lack of boundaries between distinct functionality in our code. It was clear that we needed to decrease the coupling between different domains, but the question was how

I've tried a new approach at hackathons where I build a Rails monolith that calls serverless cloud functions. So collaborators can write cloud functions in their language of choice to implement functionality and the Rails monolith integrates their code into the main app. I wonder how this approach would fare for a medium sized codebase.


Shopify's problem can be fixed without microservices by writing modular code. The monolith should be structured as a set of libraries. I find it so strange, the way these microservice debates always assume that any codebase running in a single process is necessarily spaghetti-structured. The microservice architecture seems to mainly function as a way to impose discipline on programmers who lack self-discipline.


That's the approach described in this article: https://medium.com/@dan_manges/the-modular-monolith-rails-ar...

Instead of an app directory you put all your code into gems and engines.

Shopify has taken the approach of siloing their monolith into smaller rails apps which is a similar approach to refactoring into rails engines.


    The microservice architecture seems to mainly function as a way
    to impose discipline on programmers who lack self-discipline.
Sadly we're discovering that while that's the goal, the actual result is frequently Distributed Spaghetti.


Oh dear, it seems I've lost my poor meatball.


Well, seeing as Italian-Cuisine-Driven Development is so crushingly prevalent, it seems most average developers within one standard deviation of ability on the normal distribution lack either discipline, know-how, or time. Or any combination of the three. So it's hard for software not to be spaghetti, unfortunately. That's the natural state it wants to be in over time.

Microservices won't change any of those variables. Nor will they change the normal distribution of them. So one thing is for sure: no matter what architecture or paradigm is being used, we can on average expect average-quality software. Most people, on average, don't gush about how amazingly clean their architecture is or how well defined their bounded contexts are. They tend to talk about spaghetti. I infer from that that on average it's spaghetti, and there's a chance it might not be.


1400 people work on Visual Studio, no microservices possible.

Modular code


> All of the issues we experienced were a direct result of a lack of boundaries between distinct functionality in our code

This is the key lesson to learn: if you are struggling to have clear separation of responsibilities, you are going to have a bad time with either approach. To the extent that a replacement system ends up being better it's probably due to having been more conscious about that issue.


Microservices at least force people to draw a line in the sand between subsystems/services. How effective or useful the lines you draw are, that's up to the skill of the engineers building your stuff.

I'm not saying microservices are better, but people should really take more serious considerations between the boundaries between subsystems. Because it's so easy to create exceptions, and things end up infinitely more complex in the grand scheme of things.

Clear, well-defined boundaries matter. It's the only way a developer can focus on a small part of a problem, and become an expert at working on that subsystem without needing greater context.


These conversations always get scattered because people don't post the experience they have that forms their view.

Me: I've only worked on HUGE systems (government stuff and per-second, multi-currency, multi-region telephone billing) and on systems for employees with less than 100 people.

My take: Monolith or two systems if at all possible.

This is good: A Rails app that burps out HTML and some light JS.

This is also good: A Rails app that burps out JSON and an Ember app to sop it up and convert it to something useable. Maybe Ember Fastboot, if performance warrants the additional complexity.

This is hellish: Fifteen different services half of which talk to the same set of databases. Most of which are logging inconsistently, and none of which have baked in UUIDs into headers or anything else that could help trace problems through the app.

This is also hellish: A giant fucking mono-repo[0] with so many lines of code nobody can even build the thing locally anymore. You need to write your own version control software just to wrestle with the beast. You spend literally days to remove one inadvertently placed comma.

Sometimes you have to go to hell though. Which way depends on the problem and the team.

[0] Kinda sorta, maybe the iOS app is in something else. Oh and there's also the "open source" work, like protobuffs that "works" but has unreleased patches that actually fix the problems at scale, but are "too complicated" to work into the open source project.


I think for the very large tech companies that have hundreds to thousands of engineers, microservices can make sense as a way to delegate a resource (or a set of resources) to a small group of engineers. The issue is that a lot of smaller companies/engineers want to do things the way these large companies do without understanding why they're actually doing it. The onboarding cost, as this post mentions, is huge. An engineer at a small company likely needs to know how the entire app works, and spreading that over many services can add to the cognitive load of engineering. The average web app just doesn't really benefit from the resource segregation, imo.


>I’m sure our specs were good enough that APIs are clean and service failure is isolated and won’t impact others.

Surely if you're building microservices, this line of thinking would be a failure to stick to the design? If your failures aren't isolated and your APIs aren't well-made, you're just building a monolith with request glue instead of code glue.

I appreciate the point is more that this methodology is difficult to follow through on, but integration tests are a holdover - you can test at endpoints: you should be testing at endpoints! That's the benefit.


> If your failures aren't isolated and your APIs aren't well-made, you're just building a monolith with request glue instead of code glue.

That’s pretty much every single microservice architecture I’ve ever seen, and I’ve seen a lot of them :(


That's a common thing, trying to solve the problem by moving it from one place to another.


I've had this feeling in some places that 'SOA' is a bit of a dirty word because it connotes a certain style of systems architect, or working like you do in Java or enterprise-scale PHP.

Many monolithic apps would benefit from a refactoring towards that rather than distributing a call stack across the network. The microservices can come later on if there's a need for it. If nothing else, it'll present a clearer picture of how things fit together when you start enforcing boundaries.


SOA also brings up memories of soul-destroying ESBs.

I would love a resurgence of discussions about services and how to best build and govern those without always resulting in a focus on the micro versions.

How are people building a modern IT landscape consisting of different services and systems?


The problem with ESBs in my opinion is one of tight coupling; all of these distributed systems that know about and depend upon each other. The solution to this is to loosen coupling whilst formalising interfaces/contracts/schemas, with a design that enables versioning and mandates graceful evolution. The modern version of ESB is an event based (note: this does not necessarily and probably does not mean event sourced) architecture built upon a distributed, append-only log that is ideally fed directly from a database transaction log.


ESB is new to me, but just casting a glance at it makes me think... Kafka.

You can't avoid the centralisation unless you want infinite repetition.

The human spine is composed of multiple vertebrae and forms a consistent network with the brain and CNS and the rest of your body. The spine itself is the centralisation, no matter how much you separate the bones into vertebrae.

A service bus is basically putting all of your eggs into one wire. Or so it seems... it's so easy to strawman yourself to microservices.


But conversely, how much of the purported benefits of microservices are really the benefits from having well-defined contracts between components? Are microservices mostly a forcing function for good architectural practices, that could be applied equally in a monolithic design (with internal component boundaries) with enough discipline?


I think that's a fair starting point, but once you start enforcing better boundaries, you are actually able to access the microservice benefits (i.e. independent failure, component-variant scaling, independent update deployments, etc) because there's no risk of failure from a boundary break - and those benefits are definitely inaccessible to even the best-designed monolith.


Can you explain what you mean by "request glue"?


I believe they meant network request glue


On my Samsung TV (and via casting) I have access to 6 streaming platforms: Netflix, HBO Nordic, Viaplay, TV 2 play, C-more (related to the French Canal+) and a local service for streaming managed by the national public library system (limited to loaning 3 movies per week)

Of those Netflix is famous for its complex distributed architecture and employs 100s (if not 1000s?) of the very best engineers in the world (at $400k+/year compensation). I haven't heard about ground-breaking architecture from the others and don't imagine they spend 10s of millions of $ every year on the software like Netflix does.

I'm not really seeing any difference in uptime or performance. In fact, if I want to stream a new movie, I will use Viaplay (I can rent new stuff there for $5), or the library streaming service (which has more interesting arthouse stuff).

So why is Netflix in any way a software success story, if competitors can do the same thing for 1/100th the cost?


I often go for weeks where HBO Now won't work all or at least much of the time. I try to watch a movie, it says an error occurred and gives me a trace ID. I contact support, they ask me to reboot my router. They have no idea what trace IDs are for. Could I reboot it again? HBO Now still doesn't support 4k. Netflix virtually never fails for me, is always streaming in high-quality 4k. Whatever they are doing, it is working and they are operating a scale much larger than those other players you mention.


I’ve only ever found HBO Go apps to be more reliable than Netflix. Netflix frequently takes forever for content to load, especially heavy menus, and Netflix does a poor job of remembering my place in an episode if I turn it off and switch devices. Additionally, Netflix aggressively blocks VPN traffic, even if I am a US-based customer using US-only VPN locations. Never had any of these problems with HBO apps.


Because scale is not trivial. Netflix offers more titles (I'm assuming) than all of those services combined, and more importantly, over many many times more simultaneous streams. Sure, there may be engineering effort being wasted in the ML-fueled-recommendation department, but their back-end is expensive for a reason.


If anything, the number of titles on Netflix is risible. It's only so many TV shows, each with only a few episodes a year.

The little variety of content is a great advantage, it takes little storage and it's very cache friendly.

It's countless orders of magnitude below user generated content like youtube/dailymotion/vimeo. It shouldn't be much different than video archives from TV or large studios.


Not sure if the competitors have the same kind of scaling demands as Netflix does.

If we were to go `extreme` in regard to your comparison: why does Facebook need such a large infrastructure for messaging, if my monolithic homebrew family-only messaging application has just as much uptime and performance, for $20 a month?


I would say it's impressive that Netflix (an international organization present in 190 countries) has been able to leverage its operational and development expertise to offer an experience on par with those services that only need limited local (small-nation rather than international) distribution.


Netflix has 15% of Internet traffic worldwide.


Netflix has 30% of US traffic and much much less in the rest of the world.


Others pay tens of millions to CDNs like Akamai. Not to mention that many streaming services are affiliated with ISPs, whose specialty is distributing content.

Netflix has to be a bit more efficient on the tech, because they have lower revenues and they don't own the pipes.

Besides that, the war is about content rights, not distribution. Netflix can maintain an image to be the cool kid in town, unlike older companies that don't care about that.


Glad that you gave a shout-out to the library!


I think that microservices are just a deployment model of the service boundary, and there should not really be a distinction between whether something is deployed as a microservice or a monolith, because the application should support both for the scenarios where it makes sense.

Consider the following API:

  UsersService:
   CreateUser
   GetUser

  AppCatalog:
   GetApp
   CreateApp
What if AppCatalog and UsersService implement both a local version of the interface and a gRPC one? Then the distinction between microservice and monolith goes away; it becomes a matter of whether they are deployed in a single Linux process or across process/server boundaries.

I have implemented this technique in teleport:

https://github.com/gravitational/teleport/tree/master/lib/se...

The integration test suite is run against the RPC version and the local version at the same time, to make sure the contract stays the same:

https://github.com/gravitational/teleport/blob/master/lib/se...

A single teleport binary can be deployed on one server with all microservices, or multiple cluster scenarios.

where the binary is simply instantiated with different roles:

  auth_service:
    enabled: yes
  node_service:
    enabled: no

Is Teleport a monolith? Yes! Is it a micro-service app? Yes! I'm so happy that we don't have to think about this split any more.
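
Outside of teleport's actual code, the general shape of that dual-run test setup looks something like this sketch, where newLocal and newOverGRPC are assumed to be wired up elsewhere (they are not real constructors):

    package users

    import "testing"

    // UserService is the contract shared by the in-process implementation and
    // the gRPC client; both are exercised by the same test body.
    type UserService interface {
        CreateUser(name string) (id string, err error)
        GetUser(id string) (name string, err error)
    }

    // Assumed to be wired up elsewhere: one returns the in-process
    // implementation, the other a client talking to it over the wire.
    var newLocal, newOverGRPC func(t *testing.T) UserService

    func TestCreateThenGet(t *testing.T) {
        for name, newSvc := range map[string]func(*testing.T) UserService{
            "local": newLocal,
            "grpc":  newOverGRPC,
        } {
            t.Run(name, func(t *testing.T) {
                if newSvc == nil {
                    t.Skip("implementation not wired up in this sketch")
                }
                svc := newSvc(t)
                id, err := svc.CreateUser("alice")
                if err != nil {
                    t.Fatal(err)
                }
                if got, _ := svc.GetUser(id); got != "alice" {
                    t.Fatalf("got %q, want %q", got, "alice")
                }
            })
        }
    }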


The question is transaction boundaries: try unrolling a change that had to touch the state of several services when one of the requests failed.


Right. Because we write against DynamoDB/Etcd, transactions were a non-option anyway, and we only have CompareAndSwap as a locking primitive:

https://github.com/gravitational/teleport/blob/master/lib/ba...

In addition to that, Golang's context

https://golang.org/pkg/context/

is used to broadcast the failure of a distributed operation and release the resources associated with it.



It's probably closer to multi-writes, though, than transactions in the classic sense, but a good improvement nevertheless.



I don't think it is OK to try and make service boundaries transparent and swappable. A service speaking to another service has to know the cost and overhead of the call it is making, as otherwise it can't provide an efficient interface.


I wish we could get past microservices as a buzzword. Defining a system architecture by its size is relatively meaningless.

Ultimately there are principles at play behind whether a service should have separate infrastructure from another service. If those principles aren't being critically applied then any decision will be a rough one to live with.


I think this section in the article sums up where most people's problems lie:

> So long for understanding our systems

You can't "do" microservices by just having some servers that talk to each other. You have to rebuild the tools that come naturally from monoliths. Can you get a complete set of logs from one logical request? Can you account for the time spent inside each service (and its dependants)? Can you run all your tests at every commit? Can you run a local copy of the production system? With monoliths, those come naturally and for free. log.Printf() prints to the same stderr for every action in the request. You can account for all the time spent inside each service because you only have one service. All your tests run at every commit because you only have one application. You can run locally because it's one program that you just run (and presumably your server is just a container like "FROM python; RUN myapp.py").

When you carelessly switch to microservices, you throw that all away. You can't skip the steps of bringing back infrastructure that you used to have for free. Your logs have to go through something like ELK. You need distributed tracing with Zipkin or Jaeger. You need an intelligent build system like Bazel. You will probably have to write some code to make local development enjoyable. And, new concerns (load balancing, configuration management, service discovery) come up, and you can't just ignore them.

Having said that, I don't think you can ever really get away from needing the tools that facilitate microservices. Even the simplest application from the "LAMP" days was split among multiple components often running on different machines. That hasn't changed. And it's likely that you talk to many services over the course of any given request -- you just didn't write them. "Microservices" is just where you write some of those services instead of downloading them off the Internet or paying a subscription fee for them.
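
As a small illustration of the kind of plumbing you end up adding back by hand, here's a standard-library-only sketch that tags every log line with a request ID and propagates it via a header, so logs for one logical request can be stitched back together across services (the header name is just a common convention):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "log"
        "net/http"
    )

    const requestIDHeader = "X-Request-Id"

    // withRequestID reuses the caller's request ID if present (so the ID stays
    // stable across service hops) or generates one, echoes it back to the
    // client, and makes it available to handlers for logging and outgoing calls.
    func withRequestID(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            id := r.Header.Get(requestIDHeader)
            if id == "" {
                buf := make([]byte, 8)
                _, _ = rand.Read(buf)
                id = hex.EncodeToString(buf)
            }
            w.Header().Set(requestIDHeader, id)
            r.Header.Set(requestIDHeader, id) // visible to the handler
            log.Printf("req=%s %s %s", id, r.Method, r.URL.Path)
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Forward r.Header.Get(requestIDHeader) on any outgoing calls.
            w.Write([]byte("ok\n"))
        })
        log.Fatal(http.ListenAndServe(":8080", withRequestID(mux)))
    }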


This topic reminds me of Conway's law:

Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations.

— M. Conway

Microservices probably make sense for large companies which are essentially a lot of small actors who build up a big system. Medium and small organizations should probably stay away.

Or another way to think about it, choose a microservices architecture if you want to employ a lot of devs.


I am going to date myself as 'an old guy' here: I used to love building monolithic systems around Apache Tomcat. I would use background threads registered to a web app to perform background periodic processing and write the web interface using JSP (supports a fast edit/test loop). I would build an uber-JAR file that contained everything including a thin main method to start Tomcat with my app. Totally self contained (except for requiring a JDK or JRE install), easy on memory requirements, and very efficient. A bonus: if an app requires static files, they can be added to a JAR file and opened as resource streams so `everything` really is in one JAR file.

Contrast this to J2EE that I would only use reluctantly if a customer wanted to go that way.


To me personally, it is not monolith vs microservice that bothers me, but stateful vs stateless services.

If a service can't assume local state, it creates unnecessary design overhead. For example, you cannot achieve exactly-once semantics between two services without local state. If you replace local state with message queues, you just turned a 1-network-1-disk op into a 5-network-3-disk op and introduced loads of other problems.


If you are relying on local state, you can never scale to more than one machine.


How do you think Google does?


By not relying on local state?


Azure has the affinity cookie, which redirects the user to the same instance if it's a webapp.


What do you think of Kafka Streams?


I don't know.

If you (or someone) could tell me the number of network hops and disk IOs involved in performing an RPC using Kafka Streams, I will definitely take a look at it.


I think there's a middle ground; it's more a domain-driven-design-centric view of the world. Each domain is a monolithic-style application with services that run in their own processes and communicate via some type of messaging infrastructure, e.g. Postgres, Redis, ZeroMQ, etc. The critical aspect of this approach is well-defined message schemas and versioning. The services can be spun up with a Procfile or built into a container. As you move towards container-based infrastructure, other services like instrumentation, monitoring, and aggregation of logs are required.
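
One way to make "well-defined message schemas and versioning" concrete, sketched in Go (the type and topic names are illustrative): consumers dispatch on an explicit type and version carried in an envelope, instead of guessing at the payload shape:

    package messages

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // Envelope is the only thing consumers need to understand up front;
    // the payload is interpreted according to Type and Version.
    type Envelope struct {
        Type       string          `json:"type"`    // e.g. "order.created"
        Version    int             `json:"version"` // bump on breaking changes
        OccurredAt time.Time       `json:"occurred_at"`
        Payload    json.RawMessage `json:"payload"`
    }

    type OrderCreatedV1 struct {
        OrderID    string `json:"order_id"`
        TotalCents int64  `json:"total_cents"`
    }

    // Decode returns a typed payload, or an explicit error for anything it
    // does not understand, rather than silently mis-parsing it.
    func Decode(data []byte) (interface{}, error) {
        var env Envelope
        if err := json.Unmarshal(data, &env); err != nil {
            return nil, err
        }
        switch {
        case env.Type == "order.created" && env.Version == 1:
            var p OrderCreatedV1
            if err := json.Unmarshal(env.Payload, &p); err != nil {
                return nil, err
            }
            return p, nil
        default:
            return nil, fmt.Errorf("unsupported message %s v%d", env.Type, env.Version)
        }
    }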


It seems like Erlang strikes the perfect balance between what people want from both worlds. Scalability and fault-tolerance, but also coherence and established ways of doing things.


It does, but it isn't quite as cool (or as good for your job security) as rolling your own, preferably from the ground up without any libraries or other battle-tested code.


The first part of the hype cycle is "I have a hammer and now everything is a nail". The second part of the hype cycle is "I need to hammer some nails but I'm tired of hearing about how great hammers are".


When all you have is a hammer you spend a lot of time on hacker news reading about everybody else's hammers


Which kneecap do you want to get shot in? I ask this question a lot.

Microservices are trading one sort of complexity (the ball of mud) for another (configuration). I've found that the win for microservices is largely about developer efficiency, not code performance or whatever. Keep the developers from constantly tripping over each other in large systems.


Too late. Every new person we hire thinks the best idea is to break our system into tons of micro-services. It'll pretty much happen at this point. Can't fight the mob.


Everyone has an opinion; mine is around lines of code.

Do you (as in your entire company/maybe eng department) have less than 100k LOC? If yes, you should stay in a monolith (except for potentially breaking out very specific performance/storage use cases).

Do you have more than 100k LOC? You should start breaking things up so that a) teams can own their destiny and b) you can have a technology evolution story that is not "now we have a single 1 million LOC codebase and we can never rewrite it".

Evolving ~10-20 different ~20-50k LOC codebases is doable because of the enforced wire-call API boundaries; evolving 500k-2M LOC monoliths is not, unless maybe you're Google/Facebook and have their tooling and workforce.

Granted, 20-50k LOC per codebase is probably not "micro".


We've used a monolithic microservice architecture before and were happy enough about it. The application was basically structured in microservices, but developed in a single project (monorepo and all) and the build produced a single build artifact. At deployment time, configuration decided what set of services the monolith would boot and expose.

Probably not for everyone (i.e. polyglot is hardly possible and it takes a lot of discipline to avoid a hairy ball of interdependencies), but it scales in ops complexity from very small setups to large ones, when needed.


This sounds a lot like what traditional Unix apps would do with fork().


Yes, in the sense of busybox I would say.


A few things that are harder with microservices:

- A known good consistent state. How do you freeze and take a snapshot of a microservice in a distributed state?

- Caching. In a monolith you could be hitting the L1-L3 CPU caches on the local node, meaning very fast access. Accessing a local cache is 0.5-7 ns vs 500,000 ns for a network round trip: https://gist.github.com/jboner/2841832

- Tracing latency. In a monolith you can use performance tracing tools on a local process and get a good overview. In microservices you need distributed tracing tools.

- More complex architecture with more moving parts, which makes it harder to diagnose errors.

- Memory efficiency, as programming language runtimes are loaded several times for the different microservices.

Good things about microservices:

- They allow for distributed teams (backend, frontend) with a common interface (JSON calls) to communicate between the services.

- You can replace a microservice with another microservice.

- They may be a good fit for startups that need to rapidly prototype. That something is good for fast-moving startups does not mean it is good for traditional enterprises.

We are likely beyond peak hype on the hype cycle for micro services. https://en.wikipedia.org/wiki/Hype_cycle


I've seen a lot of comments here about microservices.

At work we are transforming also, so I'm in the process of setting up a personal environment for it.

I'm also joining a hackerspace and pitching for it next week (hands-on learning).

About the architecture, not much made "sense" in practice until I encountered Akka, which uses the Actor model for creating microservices.

It seems like a much better approach than everything I learned elsewhere.

Does anyone already have experience with it? ( Ps. Akka.net exist also)


I use Elixir, same thing. The Erlang VM is very powerful and makes separation of concerns easy. Splitting an app apart is hard because you get boundaries wrong, but there is no way to scale without adding more complexity somewhere.


Perhaps it's just that, as software developers, we eschew any form of design that would lead to a maintainable monolithic system (the No Big Upfront Design movement may have caused us to throw the baby out with the bathwater). Maybe we just don't yet have the tools and theory to put together a complex system made up of many individual agents in an easy way (e.g. microkernels vs monolithic kernels).

Look, we live in an era where the fastest time to market is always going to be the way to go. Microservices are nice, but they slow development down a great deal.

What we need is an easier, less subjective way to build software. I think dataflow programming will become more popular since it is easy, scales well, and is applicable to more domains than many would think.

A monolithic dataflow application has many of the advantages of micro-services and monoliths alike.

I also think the industry should probably start to shy away from OOP (especially since the industry totally dumped OOD). If you go on GitHub and find a random C program, then do the same for a random C++ program - I would bet you can wrap your head around the C far before you can even begin to understand the C++. How people can revolt against microservices and yet not question the same phenomenon with respect to basic SP vs OOP is again baffling to me.

I think microservice adoption is a heavy-handed approach to modularization. I very much like Jackson Structured Programming, dataflow programming, etc. Dataflow is actually applicable to many more domains than some think, is about as understandable, and scales about as well as, if not better than, microservices.


More people should really do in-process services before doing microservices -- the monolith and repo are your unit of deployment and running the thing, but internally, the services are your way of organizing the code and separating responsibilities.

An RPC call just becomes a function call - later you can split a logical service into an actual external service should the need arise. It also makes identifying which services talk to one another as easy as using git grep...
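
A small sketch of what that looks like in Go (names made up): callers depend on an interface, and whether the implementation is a local struct or a client for a remote service becomes a wiring decision made in one place:

    package billing

    import (
        "context"
        "strconv"
    )

    // Invoicer is the service boundary: callers depend on this interface and
    // never know whether invoices are created in-process or over the network.
    type Invoicer interface {
        CreateInvoice(ctx context.Context, customerID string, cents int64) (id string, err error)
    }

    // localInvoicer is the in-process implementation used while everything
    // still ships as one binary; the "RPC" is just a method call.
    type localInvoicer struct{ nextID int }

    func (l *localInvoicer) CreateInvoice(ctx context.Context, customerID string, cents int64) (string, error) {
        l.nextID++
        // ... persist the invoice in the shared database ...
        return "inv-" + strconv.Itoa(l.nextID), nil
    }

    // Later, an HTTP or gRPC client type can satisfy the same interface and be
    // swapped in where the application is wired together, without touching any
    // of the callers.
    func NewInvoicer() Invoicer { return &localInvoicer{} }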


> Most of our conversation focuses on scaling the database.

I think one of the emerging principles behind modern micro-service design is to break out your data model into separate services that hide the database. You can then publish data changes to an event stream. This can help avoid requests coming back to a database. I think this is a better approach compared to heavy caching (e.g. redis / memcache).

I definitely agree that micro-services aren't a silver bullet that solve every engineering problem that exists but the hyperbole of 150 micro-services is a straw man argument.

My main annoyances with Kubernetes/docker systems I have encountered is stability of the cluster and visibility into the health of pods. Both of these issues were the result of my org deciding to build our own Kubernetes from scratch and this has turned out to be a significant task.

If I was starting my own company and wanted to develop using micro-services I would probably use one of the existing off-the-shelf container cloud service providers (e.g. Amazon/Google/Microsoft). I think that is a better approach than "build a monolith then lift-and-shift into micro-services later".


The author is doing it wrong - they don't need to run a local k8s cluster with 150 services - this is the monolith way, and they should have stayed with a monolith if they want to do this.

Microservices require quite a bit of dev setup to get right, but often it comes down to being able to run a service locally against a dev environment that has all those 150 other microservices already running.

Queues are set up so they can be routed to your local workstation, the local UI should be able to proxy to the UI running in dev (so that you don't run the whole of amazon.com or such locally), deployments to dev have to be fully automated and largely lights-out, and so on. It takes a bit of time to get these dev things right, but it doesn't require running the entire environment locally just to write a few lines of code.

Debugging and logging/tracing are an issue - but these days there are some pretty good solutions to that too - Splunk works quite well, and saves a lot of time tracking issues down.


For tracing, I tried Jaeger recently and it looks promising! https://www.jaegertracing.io/


I think software engineering is inherently cyclical.

Microservices were originated by developers who were fed up with maintaining monoliths, and in the future the next generation of developers who grow up maintaining microservices will become fed up with them and move towards something more monolithic (they’ll probably have another term for it by then).


I'll be honest, I don't understand the difference between what defines a monolith vs. a microservice. My 'organization' is about 15 developers, and we all contribute to the same repo.

Visually the software we provide can be conceptually broken apart into three major sections, and share the same utility code (stuff like command line parsing, networking, environment stuff, data structures).

Certain sections are very deep technically, others are lightweight modules that serve as APIs to more complex code. Every 'service' can be imported by another 'service' because it's all just a Python module. Also, a lot of our 'services' are user facing, but perform a specialized task in an "assembly line" way. A user may run process A, which is a pre-requisite to process B, but may pass off the execution of process B to a co-worker.

Is this a microservice or a monolith?


Microservices are vertically integrated: they have their own endpoints, storage and logic, and do not connect horizontally to other microservices.

A monolith does not have any such restrictions, data structures are shared and a hit on one endpoint can easily end up calling functions all over the codebase.


A monolith is a big, often stateful app. An insurance quotation web site, for example. A microservice is a discrete and often stateless service that can be re-used by both the quotation web site and an underwriting web site. A service that looks up financial advisor commission rates, for example. Another good use for a microservice is logging.


I don't have a strong opinion on monoliths vs. microservices, as long as you don't go overboard with splitting things leftpad-style, but I believe splitting VCS repositories results in a huge waste of time when making cross-microservice API changes.

On the other hand, the king of the hill of VCS these days, git on GitHub, does not make it easy to have this kind of setup:

- it is not possible (as far as I know) to check out a subdirectory of a git repository hosted on GitHub, which is annoying for deployment

- it becomes difficult to only follow the PRs your team is interested in, since you can't tell GitHub to only notify you of changes in the subdirs you are interested in.

What are your experiences on this? when you split a monolith into microservices, do you also split the VCS repository into as many repositories?


This is a never ending cycle


I feel we're about due for another round of

"It makes programming so easy that anyone could do it because it's basically like writing English!"


That's just Robotic Process Automation. If you haven't seen it, google it :)


To me, the hardest part of software engineering is the domain understanding, not the engineering part.


I work on a project that is somewhere in the middle. We have one repo that builds some microservices. We deploy them like a monolith, though. We have absolutely no compatibility between microservices built from different versions of the repo, and we have some nice tooling to debug the communication.

And we have a little script that fires up a testable instance of the whole shebang, from scratch, and can even tear everything down afterwards. And, through the magic of config files and AF_UNIX, you can run more than one copy of this script from the same source tree at the same time!

(This means we can use protobuf without worrying about proper backwards/forwards compat. It’s delightful.)
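
For readers wondering how the config-file + AF_UNIX part enables parallel copies, a hedged sketch (not the poster's actual script): each instance binds its services to sockets in its own temp directory, so there are no fixed TCP ports to collide on.

    # A minimal sketch, not the poster's script: per-instance Unix sockets
    # instead of fixed ports, so several copies can run side by side.
    import os
    import socket
    import tempfile

    run_dir = tempfile.mkdtemp(prefix="stack-")        # unique per instance
    sock_path = os.path.join(run_dir, "api.sock")

    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)                             # no port collisions
    server.listen()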


I worked at a company where we did something similar to that once. It was a nice compromise.

It was a Rails monolith; one of the larger ones in the world to the best of our knowledge. We (long story greatly shortened) split it up into about ten separate Rails applications. Each had their own test suite, dependencies, etc.

However, they lived in a common monorepo and were deployed as a single monolith.

This retained some of the downsides of a Rails monolith -- for example each instance of the app was fat and consumed a lot of memory. However, the upside was that the pseudo-monolith had fairly clear internal boundaries and multiple dev teams could more easily work in parallel without stepping on each other's toes.


My current project does something similar. There's a single hierarchical, consistent, scalable database, and a library of distributed systems primitives that implement everything, from RPC to leader election to map/reduce, through database calls.

All other services are stateless. I just shoot the thing and redeploy, and it only costs me an acceptable few seconds of downtime.


Same sentiment here. Most clients I work for are small companies with 0 to 5 developers and in those cases I prefer to start out with a monolith so there is only one codebase and repo to coordinate and everyone is aware how the whole thing works.

One thing I enforce however is to have a clean separation of layers and concepts within that monolith (modules and package names) so that if the team grows and the need arises to break up into separate chunks most of the work is already done and the boundaries are already defined.

I try to stick with one repo for as long as possible. This makes things much easier for new developers to onboard and to coordinate or rollback changes.
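
In practice, the "boundaries are already defined" part can be as lightweight as each domain package exporting a narrow facade and keeping its internals private, something like this hypothetical layout:

    # A hedged sketch, hypothetical package names.
    #
    #   app/
    #     billing/
    #       __init__.py     # exports charge(), refund() -- the public surface
    #       models.py       # private persistence details
    #     shipping/
    #       __init__.py     # exports dispatch()
    #       service.py
    #
    # app/shipping/service.py imports only the facade, never the internals:
    from app.billing import charge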


Start monolith. Prove product. Refactor into microservices as necessary.


There never were silver bullets, there aren't any, and there won't be. IMHO, I'd bet you would never hear a construction contractor say "give me back my hammer". The value remains in the choice of tools and methodology in order to solve a problem.

Of course the author's point of view is totally valid, and so the microservices trend is also valid, and so are solutions in-between. One size won't fit everyone and as with anything going blindly for any solution can cause trouble.


    IMHO, I'd bet you would never hear a construction contractor say "give me back my hammer".
I'd bet they'd say it if half the construction industry had decided that using wood was "not webscale" and switched to using carbon fiber for everything, even where it was inappropriate and made things difficult.


Every time I read a pro-monolith article it's just "oh you don't need a microservice arch, monoliths are simpler" and every time I read a microservice article it's "microservices are more scalable" and both claims sound valid to me.

Yet I never see anyone talking about how we could combine the two to get the best of both worlds. It's always just microservices vs monoliths. (Similar things are happening in the frontend community with JS vs. no-JS debates.)


I agree that when it comes to 'arrangement' aspects like this arguing one or the other misses the point and seems more like fetishism in the 'shamanic' sense. Engineering involves trade-offs and one may fit for one task or environment but not the other. It could be the proverbial square wheels that look horrible but actually work perfectly for its niche.

One would rightfully be considered batty for trying to do /everything/ recursively for the sake of it, while dogmatically avoiding recursion by creating massively multi-dimensional arrays would also be considered insane.

(Ironically I must disagree with client-side JS as something to be avoided whenever possible but that is over concrete concerns of trust, bloat, and abuse where 'but we can't do that then' is greeted with 'mission accomplished'. If it is locked away server-side I particularly don't care if you use assembly code or a lolcat based language.)


I cannot comprehend how someone could believe monoliths are simpler. It sounds like someone is drastically confused about the difference in kind that exists between the inherent coupling of monolith / monorepo systems and the utterly superficial overhead of configuration and individual tooling of microservices / polyrepo.

Having worked on many examples of both Fortune 500 monoliths and start-up scale monoliths, I feel confident saying monoliths just fail, hands down, at all these scales.


I have worked in monoliths, implemented across several development sites, with 300 devs on average.

Monoliths only fail when architects don't have a clue about modular development and writing libraries.

Same architects will just design distributed spaghetti code, with increased complexity and maintenance costs.


Even good architects with good ideas about modularity will fail writing monoliths, because that whole approach to software is intrinsically antithetical to decoupling and modularity. It’s like asking a professional soccer player to play soccer on the bottom of a full swimming pool. Doesn’t matter how good they are because the ambient circumstances render the task untenable. It’s the same for good engineers asked to work in monolith / monorepo circumstances. Through outrageous overhead costs in terms of tooling and inflexibility, the best you can hope for is stabilizing a monster of coupled, indecipherable complexity, like in the case of Google’s codebase, and even that minimal level of organization is only accessible by throwing huge sums of money and tens of thousands of highly expensive engineers at it.


It is relatively easy to have teams writing modular code.

They just have to learn how to actually use and create libraries in their language of choice.

Each microservice is a plain dll/so/lib/jar/... maintained by a separate team.

No access to code from other teams, other than the produced library.

It isn't that hard to achieve.


Your comment makes it clear to me that you don’t understand microservices. The challenge is not in the organization of simple compilation or code units that produce libraries, not at all.

The challenge is that in reality you will always need distinct build tooling, distinct CI logic, distinct deployment tooling, distinct runtime environments & resources, etc., for almost all distinct services, as well as super easy support to add new services that rely on previously never used resources / languages / runtimes / whatever. This need happens whether you choose a monolith approach or microservice approach, but only the microservice approach can efficiently cope with it.

The monorepo/monolith approach can go one of two ways, both entirely untenable in average case scenarios: (a) extreme dictatorship mandates to enforce all the same tooling, languages and runtime possibilities for all services, or (b) an inordinate amount of tooling and overhead and huge support staff to facilitate flexibility in the monorepo / monolith services.

(a) fails immediately because you can’t innovate and end up with some horrible legacy system that can’t update to modern tooling or accommodate experimental, isolated new services to discover how to shift to new tooling or new capabilities. This does not happen with microservices, not even when they are implemented poorly.

(b) only works if you’re prepared to throw huge resources and headcount at the problem, which usually fails in most big orgs like banks, telcos, etc., and had only succeeded in super rare outlier cases like Google in the wild.


I have developed projects with SUN/RPC, PVM, CORBA, DCOM/MTS, EJB, Web Services, SOA, REST.

So I think I do have some experience regarding distributed computing.

And the best lesson is that I don't want to debug a problem in production in such systems, full of spaghetti network calls, with possible network splits, network outages, ...


Your comment about debugging is much, much more applicable to monolith services than microservices. Digging into the bowels of a monolith service to trace the path of a service call is brutal, while even for spaghetti code microservices you can rely on the hard boundary between services (even when the boundaries were drawn poorly or correspond to the wrong abstractions) as a definitive type of binary search, as well as a much more natural and composable boundary for automatically mocking calls in tests or during debugging when isolating in which component there is a problem.


With a modular monolith I need one debugger, probably something like trace points as well.

With microservices I need one debugger instance per microservice taking part on the request chain, or the vain hope that the developers actually remembered to log information that actually matters.


If I worked with you, I would give negative feedback regarding your approach to debugging. You don’t appear to be taking steps to isolate the problem, rather just lazily stepping through a debugger expecting it will magically reveal when a problem state has been entered.

In the monolith case, your debugger is likely to step into very low-level procedures defined far away in the source code, with no surrounding context to understand why or to know if sections of code can be categorically removed from the debugging because, as separated sub-components, they could be logically ruled out.

Instead you’ll have to set a watch point or something, run the whole system incredibly verbosely, trip the condition and then set a new watch point accordingly. Essentially doing serially what you could do in log(n) time with a moderately well-decoupled set of microservices.

You’d also have the added benefit that for sub-components you can logically rule out, you can mock them out in your debugging and inject specific test cases, skip slow-running processes, whatever, with the only mock system needed being a simple mock of an http/whatever request library. One simplistic type of mock works for all service boundaries.

To do the same in a monolith, you now have to write custom mocking components and custom logic to apply the mocks at the right places, coming close to doubling the amount of test / debugging tooling you need to write and maintain to achieve the same effect you can literally get for free with microservices (see e.g. requests-mock in Python).
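
For anyone who hasn't used it, this is roughly what that looks like with requests-mock (the URL is hypothetical): one generic HTTP mock covers any service boundary.

    # A minimal sketch using requests-mock; the URL is made up.
    import requests
    import requests_mock

    with requests_mock.Mocker() as m:
        m.get("http://billing.internal/invoices/42", json={"total": 100})
        resp = requests.get("http://billing.internal/invoices/42")
        assert resp.json()["total"] == 100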

And all this has nothing to do with whether the monolith is well-written or spaghetti code compared to the microservice implementation.


List of employers on my CV speaks for my approach to debugging.


Actually, people do think about combining both approaches, see the Roles concept - https://github.com/7mind/slides/raw/master/02-roles/target/r...


how can these types of discussions be held in the abstract? the number of components or services, micro or otherwise, should depend on the specific application needs.


Because patterns replace thinking in too many corners of our world


I’ve noticed a large difference in opinion from Eurocentric architecture to America-centric. The U.S. seems to favor ivory tower, RDBMS centric systems and Europe is headlong into domain-driven design, event storming, serverless, and event driven architectures.

Monolithic design is fine for simple systems, but as complexity and scale increase, so do the associated costs.

I’m currently using DDD, microservices, and public cloud because complex systems are better served that way.


Hmmmmm, what makes you say "domain-driven design, event storming, serverless, and event driven architectures" is less "ivory tower"?

"ivory tower" to me means academic, theoretical, "interesting", "pure", vs on the other end of pragmatic, practical, get-it-done, whatever-works, maybe messy. (either end of the spectrum has plusses and minuses).

"DDD, event storming, event driven architectures" don't sound... not "ivory tower" to me. :) Then again, I am a U.S. developer!


It's just basic architecture, no? ( From Europe...)


I think many think an rdbms-centric design is just "basic architecture", and all that "event sourced" stuff is over-engineered buzzword complexity.

It might very well be _useful_, it may be something many more people oughta be doing if only they knew how valuable it was. Could be! But it certainly does not seem basic or simple to me. It seems, well, "ivory tower". And something with a learning curve. Not "basic" at all. (And certainly neither do microservices).

Do y'all in Europe learn "domain-driven design, event storming, serverless, and event driven architectures" in school or something? (I don't even totally know what all those things mean, enough to explain it to someone else).


I learned it after hours (not at school), and almost everywhere best practices are applied.

Some SMBs don't know anything, but the developers take "pride" in their work, I think.

Ugly source code is everywhere, though.


And I picked it up in Accenture’s Technology Group, starting with the success of this project:

https://www.accenture.com/us-en/success-performance-manageme...


Ivory tower as opposed to collaborative. Maybe it’s the wrong comparison.

Product-oriented vs process-oriented maybe.


I'm still not sure which one you are suggesting is "product-oriented" and which one "process-oriented"!


Product would be SQL Server and some ivory-tower framework, vs. process, which would be DDD.


OK! Agreed on product vs process. I still don't think "ivory tower" means the same thing to you as it does to me, so be it!

When I look up "ivory tower" on google, google's supplied dictionary definition is "a state of privileged seclusion or separation from the facts and practicalities of the real world." I don't think that's how you're using it though? Which confused me. But ok!


I’ve used “ivory tower” to denote closed-off, non-collaborative architects that have enough power to do things “their way”.


> Monolithic design is fine for simple systems

Most systems are "simple". Or mostly simple.

What's the saying? It should be as simple as possible (but no simpler).


I mean, if you can engineer a simple system you're better off than making a complex system. I think, as many have mentioned, the main advantages of microservices are that they (a) force people to have boundaries and (b) are conceptually easier to scale (because the "bespoke" part of the architecture only needs to do simple things).


Are you sure the alignment is continental across the board? Are you talking a specific industry?


Just watched the YC Amazon CTO talk on YouTube. It seems Amazon has a service-oriented architecture whereby each team runs their own 'service' or monolith. I don't work at Amazon, so someone could probably correct me. Other teams could adopt that approach rather than the super-fine-grained microservices/containers that need a dozen things. Probably some of the microservices could be broken down into functions that run on Lambda.


the current trend seems to be pointing towards a future where all the backends are replaced with hosted API services (AWS Lambda, Google Functions, Netlify CMS, and other emerging headless CMSs).

i think that is all we really need: some kind of API-first platform that lets you run any server-side code, coupled with a nice abstracted database layer and an admin interface to go along with it.

no one has it right yet though. but i think we'll be there soon.


The main thing this article demonstrates is that you shouldn't go into microservices without knowing the lessons learned at places that have been doing it for years. Or without knowing why you want microservices.

For us it was a subset of "Production-Ready Microservices" by Susan Fowler. (It was so comprehensive we didn't need all of the things the book suggests you implement).


There are ways to keep the benefits of microservices such as isolation, while avoiding distributed computing, for example, roles:

Slides - https://github.com/7mind/slides/raw/master/02-roles/target/r...


Breaking a monolith application into micro-services looks very enticing. Teams initially benefit from the rewrite and refactoring process. But in the long run it can run into the same issues as a monolith application, and maybe more.

Issues like frequent releases, downtime, breaking changes, etc. can be solved by writing tests, testable code, refactoring and keeping the code clean.


I've found monoliths have nearly none of the advantages that people complain about and nearly all the downsides. I've noticed that people who complain about monoliths often are actually complaining about the tech used to break up the monolith,

such as: Docker, packages, serverless.

I honestly think the problem is devs not taking the time to become comfortable with tools.


Microservices are a fad, and a poorly named one at that. SOLID principles and loose coupling are a foundation for long-term design.


Poorly named? I happen to think microservices succinctly describes what they are: small services each focused on a single task or area, and assembled together to form a whole system.


I think that's the point the OP is trying to make - in the real world, "micro" services are in many cases not small.


I am very happy with my monolith. I've been watching the K8s craze with amusement.

I will be splitting off pieces of my monolith soon, but docker-compose is a very reasonable compromise for running stuff, and the pieces I'm splitting off are for aggregation and background computation, so not really micro-services at all.


I worked for a number of years on a large webapp. It talked to a couple of databases and used them as a bus. There were a number of other back end processes that read and wrote to the database. Not sexy, but solid.


There is a middle ground: the Modular Monolith - package-/domain-driven monolithic apps that can be split if the need arises.

I wrote about one approach to doing this in the world of PHP using Laravel (Lumen)

https://link.medium.com/VA2Vq6zV2U


the problem is not the division of a monolith into microservices but the over-engineering of those microservices.

No, you don't need Docker. My recent .NET Core project had 3 projects (FE, API, Dashboard). Each one was deployed via CI to its respective server, but deployment, QA, etc. were all done outside of a Docker environment because there was nothing the environment could alter. We knew we developed on Windows 10 and deployed/QA'd on Ubuntu Xenial. The CI was configured around that, and it then sent the DLLs to the server and restarted Apache after deployment.

The only case I can see for needing Docker is if you want to include your database as well, but we opted for a cloud DB (Azure) and each service had its own.

Once again we come back to the discussion in which the problem is not the technique (microservices) but the over-engineering of such solutions.


I absolutely love the way the Elixir/Erlang + OTP projects (even within frameworks like Phoenix) decouple code organization / administration from the software runtime itself.

You can have both ^^ (and every tradeoff whichever extreme you feel more comfortable veering towards).


My take on monoliths vs microservices here - https://blog.rootconf.in/will-the-real-micro-service-please-...


I knew it'd make a comeback!


> Setup went from intro chem to quantum mechanics

Sums up frontend web development nowadays.


With serverless, you can have a monolith.

Frameworks like Serverless or AWS SAM allow you to create a backend where all functions reside in one repository but get deployed in a way that each of them scales independently.


Unless you have more than 100 devs, a well-modularized monolith, with emphasis on well modularized (as if it were microservices inside a monolith), is usually the best option.


Is this the Cathedral vs Bazaar discussion?

Basically I am wary of having a package manager pull new stuff unless I pin the versions to what I personally looked at.


People form their opinions from what they read in blog posts like this rather than from their own experience. Take the right tool for the job - over.


Microservices usually require data redundancy, which goes against the philosophy of the database that the author is affiliated with.


I wonder if this is a pointless ideological argument that isn't about micro-services vs. monoliths but actually about tooling. I work on a particularly nasty monolith and have similar complaints to the ones this person has about micro-services. Onboarding is painful, logging is a mess, and performance testing specifically is a pain (even this monolith has various components and caches). Added to that, we have recruitment issues because it's not shiny, despite the company culture being really sweet.


Some people, when confronted with a problem, think "I know, I'll use microservices." Now they have 501 problems.


So how do you scale your monolith? Just run more instances even when your few interesting routes are the primary bottlenecks?


Exactly. If the option is more servers on one hand, and servers plus k8s plus specialized skills plus additional deployment and development complexity on the other, I know which one I'd choose.


The one that gave you more job security?


Ah, a cynic. :)

Depends on my investment in the company and what the company rewards me for, to be honest.

Most times I like to work in companies where I'd be rewarded for choosing the best solution for the company, regardless of job security. For instance, I've actively fought against language creep at a company because it would end up siloing developers.

But I'm not naive. If I worked at a company that rewarded me for complicated architectures, I'd deliver complicated architectures.


When onboarding new devs from the bootcamps, transitioning them to AWS + Lambda is like starting from square one in terms of what they were expecting to work on. Very much a challenge, especially getting them to think about how to leverage good CI/CD for the cloud. There are still a lot of knowledge gaps out there.


It's a corollary of Conway's Law that your services should be as granular as your product teams.


No. At the same time: 6R's. Do what makes sense and measures right, not what is hip in the moment.


Scaling the database link unfortunately 404s. Would love to read the accompanying blog post.


Thanks for the catch, should be updated now.


Works now, thanks!


Pretty bad article. Spelling mistakes, no references and broad statements like "For many junior engineers this is a burden that really is unnecessary complexity".

Microservices in a monorepo with a proper dev env and build pipelines is just as "simple" as a monolith. Simple in quotes because I have seen crazy, sprawling monoliths.


My personal list of overhyped technologies:

- TDD

- Microservices

- Single-Page-Applications

- Node.JS

They tend to be used even when there are better options in specific cases.


Microservices are resumé-oriented architecture, like using CORBA 25 years ago.


You could easily write the same article about single threaded programming.


I doubt the author had '150 micro-services' in real life. A monolith that could be separated into more than 100 micro-services is already hellishly complicated, and engineers live with the pain of it.


I've seen half that IRL recently; given my sample size, I do not doubt there is an architectural whiz somewhere who has pushed it to double that.


I know of a team who has 150 micro services. It's probably more, I should count them.

Needless to say, it is a giant clusterfuck.


Leftpad as a service?


cough, cough, SAP


In that case I could actually see some valid reasons for it.


Microservices as just small monoliths


I agree. The popularity of Microservices stems from messy large systems. So why not just have messy small systems instead.

Why is it that people believe they need a microservice architecture in the first place? None of the benefits of microservices are absent in a carefully designed monolith.

If we are not going to give up our frenetic rapid development practices then we just need tools that help us move fast while keeping code understandable. Maybe we just need higher level languages where the machine can just keep track of all the details from extremely high level specifications. Software is too hard for humans.


This phrase, slightly paraphrased, was part of what triggered me to found my startup. "I want my monolith back." It was even a slide in our first pitch deck.

So I empathize. I do get the motivation behind microservices (or other flavors of distributed system—I tend to use the microservice term a little loosely). Too many people/teams working on the same deployable eventually becomes a bottleneck for collaboration, builds and tests can take a long time for even small changes, governance and domain-separation becomes harder, and so forth. You'll also grow to have different SLOs and tolerances for different parts of your system. For example, logins should almost never fail, but background workers might slow down or fail without major fallout. Plus different services may have completely different scale/resource requirements.

Really, the question is: When do microservices become important for you (if ever)? When is it justifiable to do it presumptively, anticipating future growth? We all need to make those bets as best we can.

That said, I strongly believe that tooling can lower the baseline cost of splitting systems into microservices. That was one of our main motivations for starting garden.io—bringing the cost, complexity and experience of developing distributed backends to that level, and hopefully improving on it. We miss being able to build, test and run our service with a single command. We miss how easy it was to navigate our code in our IDE—it was all in the same graph! Our IDE could actually reason about our whole application. You didn’t have to read a README for every damn service to figure out how to build it, test it and run it—hoping that the doc was up to date. You could actually run the thing locally in most cases, and not have minikube et al. turning your laptop into a space heater.

I don’t want to plug too much here (we’ll do the Show HN thing before long), but we’re working on something relevant to the discussion. We want to provide a simple, modular way to get from a bunch of git repos to a running system, and build on that to improve the developer experience for distributed systems. With Garden, each part of your stack describes itself, and the tooling compiles those declarations into a stack graph, which is a validated DAG of all your build, test, bootstrap and deployment steps.

The lack of this sort of structure is imo a huge part of the problem with developing distributed systems. Relationships that, in monoliths, are implicit in the code itself, instead become scattered across READMEs, bash scripts, various disconnected tools, convoluted CI pipelines—and worse—people’s heads. We already know the benefits of declarative infrastructure, IaaC etc. Now it’s just a question of applying those ideas to development workflows.

With a stack graph in hand, you can really start chipping away at the cost and frustration of developing microservices, and distributed systems in general. Garden, for example, leverages the graph to act as a sort of incremental compiler for your whole system. You get a single high-level interface, a single command to build, deploy and test (in particular integration test), and it gets easier to reason about your whole stack.

Anyway. Sorry again about the plug, but I hope you find it relevant, if only at an abstract level. Garden itself is still a young project, and we’re just starting to capture some of the future possibilities of it, but I figure this is as good an opportunity as any to talk about what we’re thinking :)


I've built my personal side project as microservices. I started with an initial POC in Python and then I had a clear vision for what services to build.

https://github.com/insanitybit/grapl

> I’d have the readme on Github, and often in an hour or maybe a few I’d be up and running when I started on a new project.

I can deploy all of my services with one command. It's trivial - and I can often just deploy the small bit that I want to.

I don't use K8s or anything like that. Just AWS Lambdas and SQS based event triggers.

One thing I found was that by defining what a "service" was upfront, I made life a lot easier. I don't have snowflakes - everything uses the same service abstraction, with only one or two small caveats.

I don't imagine a Junior developer would have a hard time with this - I'd just show them the service abstraction (it exists in code using AWS-CDK)[0].

> This in contrast to my standard consolidated log, and lets not forget my interactive terminal/debugger for when I wanted to go step by step through the process.

It's true, distributed logging is inherently more complex. I haven't run into major issues with this myself. Correlation IDs go a really long way.
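
As a hedged illustration (not the project's actual code; field names hypothetical), the correlation-ID pattern is just: pick up the ID from the incoming event and attach it to every log line, so the aggregator can stitch a request back together across services.

    # A minimal sketch, hypothetical field names: tag every log record with
    # the correlation id carried by the incoming event.
    import logging
    import uuid

    logger = logging.getLogger("my-service")

    def handle(event):
        corr_id = event.get("correlation_id") or str(uuid.uuid4())
        log = logging.LoggerAdapter(logger, {"correlation_id": corr_id})
        log.info("processing event")  # formatter can emit %(correlation_id)s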

Due to serverless I can't just drop into a debugger though - that's annoying if you need to. But also, I've never needed to.

> But now to really test my service I have to bring up a complete working version of my application.

I have never seen this as necessary. You just mock out service dependencies like you would a DB or anything else. I don't see this as a meaningful regression tbh.

> That is probably a bit too much effort so we’re just going to test each piece in isolation, I’m sure our specs were good enough that APIs are clean and service failure is isolated and won’t impact others.

Honestly, enforcing failure isolation is trivial. Avoid synchronous communication like the plague. My services all communicate via async events - if a service fails, the events just queue up. The interface is just a protobuf-defined data format (which is, incidentally, one of the only pieces of shared code across the services).
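
A hedged sketch of that fire-and-forget shape (not the project's actual code; the queue URL is made up, and JSON stands in for the protobuf payload to keep it short):

    # A minimal sketch, hypothetical queue: the producer enqueues and moves
    # on; if the consumer is down, messages simply accumulate in SQS.
    import json
    import boto3

    sqs = boto3.client("sqs")

    def emit(event: dict) -> None:
        sqs.send_message(
            QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/node-events",
            MessageBody=json.dumps(event),
        )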

Honestly, I didn't find the road to microservices particularly bumpy. I had to invest early on in ensuring I had deployment scripts and the ability to run local tests. That was about it.

I'm quite glad I started with microservices. I've been able to think about services in isolation, without ever worrying about accidental coupling or accidentally having shared state. Failure isolation and scale isolation are not small things that I'd be happy to throw away.

My project is very exploratory - things have evolved over time. Having boundaries has allowed me to isolate complexity and it's been extremely easy to rewrite small services as my requirements and vision change. I don't think this would have been easy in a monolith at all.

I think I'm likely going to combine two of my microservices - I split up two areas early on, only to realize later that they're not truly isolated components. Merging microservices seems radically simpler than splitting them, so I'm unconcerned about this - I can put it off for a very long time and I still suspect it will be easy to merge. I intend to perform a rewrite of one of them before the merge anyway.

I've suffered quite a lot from distributed monolith setups. I'm not likely to jump into one again if I can help it.

[0] https://github.com/insanitybit/grapl/blob/master/grapl-cdk/i...


Grapl looks quite interesting. I'm looking for something similar for public cloud (e.g. CloudTrail + Config + ?? for building the graph + events). Is there a general pattern you employ for creating the temporal relationship between events? E.g. Word executing a subprocess and then making a connection to some external service. Do you just timestamp them, or is there something else?


I think what you're getting at is Grapl's identification process. It's timestamp based, primarily, at the moment, yes.

A bit of the algorithm is described here: https://insanitybit.github.io/2019/03/09/grapl

More specifically Grapl defines a type of identity called a Session - this is an ID that is valid for a time, such as a PID on every major OS.

Sessions are tracked or otherwise guessed based on logs, such as process creation or termination logs. Because Grapl assumes that logs will be dropped or come out of order/ extremely delayed it makes the effort to "guess" at identities. It's been quite accurate in my experience but the algorithm has many areas for improvement - it's a bit naive right now.

Happy to answer more questions about it though.

Based on what you seem to be interested I'd like to recommend CloudMapper by Scott Piper.

https://github.com/duo-labs/cloudmapper


The blog post is super helpful! I think the session concept is the thing I needed. Thank you!

I tried running cloudmapper but I think I would need to replace the backend with a graph database and scrap the UI parts. We've got hundreds of AWS accounts and I'm having trouble just getting it to process all the resources in one of them.


FWIW, Scott Piper, who builds CloudMapper, also consults.

Glad I could help.


I think there is a place for services right at the beginning if they are well-defined, pre-existing services like an authentication or chat service. It is an easy way to add common functionality, and you don't have to maintain the service itself, just integrate it into the overall system. For the rest of the more domain-specific stuff, just build a monolith and extract services out of it when it feels right. They don't have to be "micro" though, just a service. It does require some discipline to keep modules as separated as possible so that it will be easier to extract them as services later.


One thing to remember is that micro-services aren't meant for small/medium-size companies; they are meant for large companies, where a monolithic application can't sustain high loads.


Have you read my blog post: Grey Goo as an Architecture - Monolith and Micro-Service Swarm as an artificial antagonism?


Microservices are a business organizational tool. They literally bring nothing to table from a technical standpoint.


What? This comment seems ridiculous to me. They aren't a panacea and aren't right in all circumstances, but they have plenty of technical advantages. You can write code for different services in different languages / on different stacks, prototype using a new language/technology/stack with a small piece of the overall application, develop and deploy in parallel more easily, if one component fails it's less likely to bring down the whole application, gives you more freedom to scale if certain components of an application require more resources or different types of resources than others....

That's off the top of my head. These all come with tradeoffs of course, but to say they bring nothing to the table is absurd.


> You can write code for different services in different languages

But wouldn't that mean that the services must have no code whatsoever in common? And in that case, why would they be part of a monolith in the first place?


Isolated horizontal scalability? Sure, microservices aren't the end all be all of architecture design but let's not act like they "bring nothing to the table" technically.


You can have a monolith that easily scales horizontally. See the 12-factor/Heroku model. If you have a piece that requires more scalability, you can run just that piece with a command flag/env var. You get the same effect with none of the overhead.
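
Something like this hypothetical entrypoint is all it takes: one artifact, and the role each process plays is picked by an env var, so only the hot path gets extra instances.

    # A minimal sketch, hypothetical names: the same build runs as web or
    # worker depending on an environment variable.
    import os

    def main():
        role = os.environ.get("APP_ROLE", "web")
        if role == "web":
            run_web()       # scale this to a handful of instances
        elif role == "worker":
            run_worker()    # scale this one up for the hot queue
        else:
            raise ValueError(f"unknown APP_ROLE: {role}")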


If you want the "simple" dev experience of a monolith, but the technical advantages (or just plain reality of your distributed systems) of services-based architectures, Tilt is a really great solution: https://tilt.dev/

It makes both development of services-based apps easier, and the feedback/debugging of those services. No more "which terminal window do I poke around in to find that error message" problem, for one.


> No more "which terminal window do I poke around in to find that error message" problem, for one.

What? Just throw everything in syslog/journal, then stream that to an aggregator like logstash. Now you can get all logs from one system with journalctl, and all logs for an environment from kibana.



