People have been using microservices for decades. Think about the UNIX filesystem. Applications don't know anything about writing to disk blocks; the filesystem service handles that. And the tools that interact with the filesystem can stay focused on the task at hand. cp only knows how to copy files; ls only knows how to list directories. If you want to add a new feature to ls, you need not worry about breaking cp.
On the "web application" side, we've also been doing the same thing since the LAMP days. Your web app has no understanding of how to efficiently store records; the database service handles that. Your web app has no idea how to efficiently maintain TCP connections from a variety of web browsers all using slightly different APIs; your frontend proxy / load balancer / web server handles that. All your app has to do is produce HTML. It's popular because it works great. You can change how the database is implemented and not break the TCP connection handling of the web server. And people are doing that all the time.
All "microservices" is is doing this to your own code. You can write something and be done with it, then move on to the next thing. This is the value -- loose coupling. Well-defined APIs and focused tests mean that you spend all your time and effort on one thing at a time, and then it's done. And it scales to larger organizations; while "service A" may not be done, it can be worked on in parallel with "service B".
That said, I certainly wouldn't take a single team of 4 and have them write 4 services that interact with each other concurrently. That isn't efficient, because early on not enough exists for anyone to make progress on their individual piece, unless the system is very simple. But if you build them one at a time, you gradually assemble a very reliable and maintainable empire, gaining more and more functionality with very little breakage.
The problem people run into is that they start developing these services without investing in the tooling needed to make this work. You need easy-to-use fakes for each service, so you don't have to start up the entire stack to test and play with a change in one service (a sketch of this pattern follows below). You need to collect all the logs and store them where you can see all of them at once. You need monitoring so you can see which components aren't interacting correctly (though monoliths need this too). You need distributed tracing so you can get an idea of why a particular high-level request went wrong. All of these things are available off the shelf for free and can be configured in a day or two (ELK, Jaeger, Prometheus, Grafana).

(The other problem people run into is bad API design. There is no hack that works around bad API design in microservices; your "// XXX: this is global for horrible reasons" simply isn't possible. You have to know what you want the API surface to look like, or it's going to be a disaster. It's just as much of a disaster in monoliths, though; this is how you get those untestable monstrosities that break randomly every release. Microservices make you fail now; monoliths make you fail later.)
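As a sketch of the fakes point above, one common pattern is to ship each service's client as an interface plus an in-memory fake, so tests against one service never need the real stack. The Inventory service and its methods here are made up for illustration:

    // Package inventory sketches a service client with an in-memory fake.
    package inventory

    import (
        "errors"
        "sync"
    )

    // Client is the API surface other services program against.
    type Client interface {
        Reserve(sku string, n int) error
        Stock(sku string) (int, error)
    }

    // Fake is an in-memory Client for tests: no network, no real service.
    type Fake struct {
        mu    sync.Mutex
        stock map[string]int
    }

    var _ Client = (*Fake)(nil) // compile-time check that Fake satisfies Client

    func NewFake(initial map[string]int) *Fake {
        s := make(map[string]int, len(initial))
        for k, v := range initial {
            s[k] = v
        }
        return &Fake{stock: s}
    }

    func (f *Fake) Reserve(sku string, n int) error {
        f.mu.Lock()
        defer f.mu.Unlock()
        if f.stock[sku] < n {
            return errors.New("insufficient stock")
        }
        f.stock[sku] -= n
        return nil
    }

    func (f *Fake) Stock(sku string) (int, error) {
        f.mu.Lock()
        defer f.mu.Unlock()
        return f.stock[sku], nil
    }

A test for any service that depends on inventory then just constructs NewFake(...) instead of dialing anything, and the whole stack never has to come up.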
> You can write something and be done with it, then move on to the next thing.
That's part of the problem: you really can't. Requirements change. You have to update that service and hope the change is backwards compatible, because if not, have fun updating all the services that interact with it.
I don't think requirements change nearly as often as people think they do. Open up your repository and look at the oldest files... probably still running in production, and never needed any changes. (Additionally, a careful design will still be the right one even in the face of changing requirements.)
Making changes backwards compatible is fairly simple: if semantics change, give the field a new name; if a once-mandatory field goes away, just ignore it. (This is why you use something like protocol buffers and not bare JSON: it's designed for a client and server operating off of different versions of the same data structure.)
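As a sketch of what that looks like in a protobuf schema (the message, field names, and numbers are all invented for illustration): when semantics change, retire the old field number and add a new field under a new name, so old and new binaries can keep exchanging messages.

    syntax = "proto3";

    message GetUserRequest {
      // v1 field whose semantics changed: reserve its number and name so
      // they are never reused. Old binaries that still set field 2 are
      // harmless; new servers treat it as an unknown field and ignore it.
      reserved 2;
      reserved "include_deleted";

      string user_id = 1;

      // New name for the new semantics, rather than repurposing field 2.
      bool include_archived = 3;
    }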
Having two ways to do something is always going to be a maintenance burden and make your service harder to understand. That isn't specific to microservices or monoliths; the same problem exists in both cases, and the solution is always the same: decide whether maintaining two similar things is easier than refactoring your clients. In the case of internal services where you control all the clients, refactoring is easy; you just do it, then it's done. In the case of external services, refactoring is impossible, so you maintain all the versions forever or risk your users getting upset.
On the "web application" side, we've also been doing the same thing since the LAMP days. Your web app has no understanding of how to efficiently store records; the database service handles that. Your web app has no idea how to efficiently maintain TCP connections from a variety of web browsers all using slightly different APIs; your frontend proxy / load balancer / web server handles that. All your app has to do is produce HTML. It's popular because it works great. You can change how the database is implemented and not break the TCP connection handling of the web server. And people are doing that all the time.
All "microservices" is is doing this to your own code. You can write something and be done with it, then move on to the next thing. This is the value -- loose coupling. Well-defined APIs and focused tests mean that you spend all your time and effort on one thing at a time, and then it's done. And it scales to larger organizations; while "service A" may not be done, it can be worked on in parallel with "service B".
That said, I certainly wouldn't take a single team of 4 and have them write 4 services that interact with each other concurrently. That isn't efficient because not enough exists for anyone to make progress on their individual task, unless it's very simple. But if you do them one at a time, you gradually build a very reliable and maintainable empire, getting more and more functionality with very little breakage.
The problem that people run into is that they act as the developers of these services without investing in the tooling needed to make this work. You need to have easy-to-use fakes for each service; so you don't have to start up the entire stack to test and play with a change in one service. You need to collect all the logs and store them where you can see all of them at once. You need monitoring so you can see what components aren't interacting correctly (though monoliths need this too). You need distributed tracing so you can get an idea of why a particular high-level request went wrong. All these things are available off the shelf for free and can be configured in a day or two (ELK, Jaeger, Prometheus, Grafana). (The other problem that people run into is bad API design. There is no hack that works around bad API design in microservices; your "// XXX: this is global for horrible reasons" simply isn't possible. You have to know what you want the API surface to look like, or it's going to be a disaster. It's just as much of a disaster in monoliths, though; this is how you get those untestable monstrosities that break randomly every release. Microservices make you fail now, monoliths make you fail later.)