The End of Microservices (lightstep.com)
258 points by reimertz on July 21, 2016 | 145 comments



I'll tell you the real reason behind microservices: developer fiefdoms. "Faux-specialization". It allows developers to feel like they have control over certain pieces of infrastructure and run the gamut of strategies for grabbing ever-larger pieces of the pie.

It has nothing to do with building reliable software. You could just as easily build and deploy a single networked application (a so-called "monolith") composed of many different libraries with well-defined interfaces that can be tested in isolation. In fact, that's how most non-web software is still written.
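For illustration, a minimal sketch of what that looks like (Python just as an example; all names hypothetical): modules behind well-defined interfaces, composed in one process, each testable in isolation.

    # A hypothetical "monolith" composed of libraries with well-defined
    # interfaces; no network hop between them.
    from abc import ABC, abstractmethod

    class EtaEstimator(ABC):
        """Interface owned by one team, testable in isolation."""
        @abstractmethod
        def eta_minutes(self, rider_id: str) -> int: ...

    class NaiveEtaEstimator(EtaEstimator):
        def eta_minutes(self, rider_id: str) -> int:
            return 5  # stub implementation

    class App:
        """The application wires the modules together in-process."""
        def __init__(self, eta: EtaEstimator):
            self.eta = eta
        def pickup_message(self, rider_id: str) -> str:
            return f"Car arrives in {self.eta.eta_minutes(rider_id)} min"

    # Tests inject a fake EtaEstimator; production injects the real one.
    print(App(NaiveEtaEstimator()).pickup_message("u123"))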

The real reason is that having these microservices allows single developers or teams to own or control parts of the codebase and enforce their control via separate repos and, at runtime, via authentication: Sally can't see the code to Joe's service, and Joe can't make requests to Sally's production instance of her service that estimates how long until the car arrives to pick up poor end user Bob.

I've seen this same thing play out countless times at large tech companies and startups alike. It has nothing to do with building scalable, or more maintainable, or more cleverly designed applications. If anything, it adds more complexity, because now we need to do all kinds of data marshaling, error checking, and monitoring, and run more infrastructure, for something that should have been done in shared memory/in-process to begin with. Not to mention all the issues and headaches caused by fan-out of tons of API requests, complicated caching scenarios, etc. I've seen the horror of microservices architecture where no one person is responsible for the actual app, only their "service".

There are a few exceptions where it's useful to scale out parts of a distributed application, but in 99% of my experience the services aren't a real distributed system anyway and are vaguely organized by function, developer interest, and yes, control.


>In fact, that's how most non-web software is still written.

I can add that we build only web software, but we build it exactly that way.

>If anything, it adds more complexity, because now we need to do all kinds of data marshaling, error checking, and monitoring, and run more infrastructure, for something that should have been done in shared memory/in-process to begin with.

I couldn't agree more. Martin Fowler warned us a long time ago: "The first rule of distributed objects: don't distribute them".

>There are a few exceptions where it's useful to scale out parts of a distributed application

Yes, and very very very few. As I always say: microservices are not an architecture, they are an optimization.


> I can even add that we only do web software, but we do them exactly that way.

I can even add that not only do we do them exactly that way, but it's many times faster and more reliable than microservices, for reasons I have written about before[1].

The reality is that microservices add a whole host of failure modes, for just one benefit over monoliths: They are easier to scale horizontally. If you don't absolutely need horizontal scalability, and not many companies do, then you'd be foolish to pay the large cost.

[1]: https://www.stavros.io/posts/microservices-cargo-cult/
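To make the failure-mode point concrete, here's a minimal sketch (Python stdlib, hypothetical service URL) of what the same call has to handle once it crosses the network:

    import json
    from urllib import request

    def get_eta_in_process(rider_id):
        return 5  # in-process: the only failure mode is a plain bug

    def get_eta_over_network(rider_id):
        # The identical call as an RPC picks up brand-new failure modes:
        # timeouts, refused connections, bad statuses, contract drift.
        url = f"http://eta-service.internal/eta?rider={rider_id}"  # hypothetical
        try:
            with request.urlopen(url, timeout=2) as resp:
                return json.load(resp)["minutes"]
        except OSError:                 # DNS failure, timeout, connection refused
            return None                 # ...and every caller now needs a fallback
        except (KeyError, ValueError):  # missing field or garbled JSON
            return None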


> The reality is that microservices add a whole host of failure modes, for just one benefit over monoliths: They are easier to scale horizontally. If you don't absolutely need horizontal scalability, and not many companies do, then you'd be foolish to pay the large cost.

If you are writing and deploying standalone applications, I agree. Microservices do have another benefit, though: forcing the application to be modularized. There are better ways to do that, however, like having proper systems engineering lay out the application architecture.

If you are writing applications that need to be integrated with third-party applications or need to be configured with different functionality for different customers, microservices can still make sense. This is what has pushed my company in that direction. We currently have a set of monolithic applications with varying but overlapping functionality, and customers who want feature X from application Y and feature W from application Z, plus feature V that we are developing just for them. Our code has become a mess of circular dependencies and duplicated code in order to meet these desires.

The microservice (maybe not micro, but definitely service) architecture we are moving to will allow us to compose applications with reusable building blocks more easily than trying to maintain different git branches and library versions. It also allows us to better manage the non-code resources we need to deploy, as we have hundreds of gigs of data resources used by various libraries. Furthermore, we will be able to deploy applications with either our in-house GUI or command-line tools, as well as provide the APIs necessary for our customers to integrate our applications into their own third-party frameworks.


> The microservice (maybe not micro, but definitely service) architecture we are moving to will allow us to compose applications with reusable building blocks more easily than trying to maintain different git branches and library versions.

Can you elaborate why? Presumably, with microservices, you're always running the latest version, whereas you're mentioning a need for "different branches and versions" in your comment.

Microservices don't force you to modularize, you can still make a mess by not cleanly separating things or by cutting across the wrong boundary. You modularized well because you created a clean, separated, thought-out architecture, not because of any intrinsic property of microservices.


> Can you elaborate why? Presumably, with microservices, you're always running the latest version, whereas you're mentioning a need for "different branches and versions" in your comment.

Not necessarily. Depends on the needs of the customer and what we deploy to them. In general, we try to deploy the most recent version of everything when we make a release for a particular customer.

The issue we have with our builds is dependency management. As I note below, that in itself is something that could be addressed by refactoring our projects[1]. Because of the way we need to deploy our software (shaded uberjars), even a well-structured dependency hierarchy would result in large data updates every time we had to update the monolithic application.

> Microservices don't force you to modularize, you can still make a mess by not cleanly separating things or by cutting across the wrong boundary. You modularized well because you created a clean, separated, thought-out architecture, not because of any intrinsic property of microservices.

They don't force you to modularize well, but they force you to think about how to break your application up. You still need the systems engineering to break it up well.

You're right that services aren't the only way or necessarily the best way to accomplish this. We could get the modularization just by refactoring the organization of our projects. In the absence of the requirements for third-party integration and an actual need for horizontal scalability, that is what I would have done. All told, though, I think the service paradigm was the right way to go for us.

I need to note for context that the software we write and deliver needs to be delivered on physical disks (yes, we still literally ship software) and that deployment size is an issue. We also can't guarantee that we will have access to anything besides a Windows or Linux JVM, so our deployment needs to drag all of its supporting data along with it. Being able to simply deploy a new version of a particular service makes updates much less painful.

[1] And actually using versions for our Maven artifacts, but that is a battle I lost a long time ago...


Thanks for the comments! +1 to "APIs necessary for our customers." I think every developer should have a customer-oriented mindset, even if their customers are just other developers in the same organization.


> I couldn't agree more. Martin Fowler warned us a long time ago : "The first rule of distributed objects : don't distribute them".

Funny that the same person now carries the flag for microservices, which are seen as a way to "get rid of architects". That's exactly what Thoughtworks believes in - and you will understand more if you read the first chapters of the book "Building Microservices", or if you join their recent conferences. Just my 2 cents.


Not to take anything from Martin Fowler, but the "do not distribute (unless you absolutely have to)" rule originated at Sun much, much earlier. I have a hardcopy of the paper somewhere, it was a conclusion drawn from acknowledging the fallacies of distributed computing.

https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...


You say it as if it's a bad thing... Compared to the alternative, fiefdoms really are a wonderful thing!

I once worked for a giant Fortune 500 corporation. Our department had a good 300 developers, and had been writing code for years. They had built the whole system using the classic box-box-cylinder architecture. There were hundreds of little top-tier 'services', but in practice, they all shared the same database, had to be deployed pretty much at the same time, and version upgrades had to go in lockstep. Every so often, database changes were necessary, and the world would grind to a halt for months to make the system still work after the migration: It was awful.

On top of this, having everyone using one stack really meant that a council of elders got to make tech decisions, and everyone else just did what they were told. This built a giant hierarchy. People near the top would never leave, because nowhere else would give them that much power. Talented developers would take years to gain influence, so they often just left. What remained was developers with no ambitions other than getting a paycheck... it was horrible.

The alternative was to let people maintain their own stacks, as long as they provided interfaces that other people could call. By limiting how much code could talk to a database, you didn't need negotiations to change something: teams made any change they wanted as long as they remained backwards compatible, and then had to lobby users to upgrade to the newest API versions if they wanted to remove the backwards-compatibility mess. It was awesome in comparison.
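To sketch what "stay backwards compatible, then lobby users off the old version" meant day to day (a hypothetical handler, not our actual code):

    # Hypothetical sketch: a team ships v2 of its interface but freezes the
    # v1 response shape until the last caller has migrated.
    USERS = {"42": {"first": "Sally", "last": "Example"}}

    def handle(path, params):
        user = USERS[params["id"]]
        if path == "/v1/user":   # legacy shape, frozen; removed only after lobbying
            return {"name": f"{user['first']} {user['last']}"}
        if path == "/v2/user":   # new shape; only additive changes from here on
            return {"name": user, "api_version": 2}
        raise ValueError(f"unknown path: {path}")

    assert handle("/v1/user", {"id": "42"}) == {"name": "Sally Example"}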

A gigantic place won't have those problems, because they can invest money in making whatever tech decisions they made tenable: PHP is slow? Let's build a new runtime and compiler, says Facebook! If you are tiny, you don't need any of this BS, because if your team of 8 engineers can't agree, your company will fail regardless. But when you have 200 engineers, it's either giving people more control over a piece of the pie or bleeding talent.

The one thing you still need to do is make sure teams are the right size, and people have enough accountability, that the product still works. You also need silly amounts of data marshaling and error checking compared to the monolith. But the terror of a company that can't hire engineers, because the only way to have any control over your daily life is to have been there for 5 years, is hard to beat. When people say they don't want to work big corporate gigs, what they really mean is that monoliths become soul-sucking.

So yes, I give thumbs up to fiefdoms, just like I'd rather have inefficiency in a republic vs a theoretically efficient dictatorship.


I agree with you that it's not a bad thing - I think it's the main benefit of microservices for most larger organizations, the ability for different teams to control their own destiny and not be blocked on each other.

But there's now a whole bunch of new developers out there who are just starting out and read all about these microservices, and think that it's the way to write web backends, and that if you don't do it that way, you just don't know what you're doing. Even if you're a two-person team - you must use microservices if you want to scale out in the future, apparently. It's just unfortunate how much cargo culting is going on.


Except it isn't just newbies. Devs with experience look for that holy grail of architecture. They sell it to management by saying the system will be able to change with any new requested feature with minimal effort.

Until it doesn't. And that change will come and it'll make your fancy architecture look like a tin shack instead of a stately manor.

These devs are loud. And management loves loud. Loud is confidence.

We need a Dilbert for developers. Truly.


I really value your contribution, thanks for sharing.

I give the proponents/consultants credit for proposing a practical and incremental solution to these people instead of just insulting them or saying "you're doing it wrong".

That being said, the issue I have with microservices is the fact that the below is repeatedly used as a core technical argument:

> They had built the whole system using the classic box-box-cylinder architecture. There were hundreds of little top-tier 'services', but in practice, they all shared the same database, had to be deployed pretty much at the same time, and version upgrades had to go in lockstep. Every so often, database changes were necessary, and the world would grind to a halt for months to make the system still work after the migration: It was awful.

This is not a feature of "monoliths", this is a feature of shortsighted design (at the beginning) and cowboy design (at the end).

Let's be clear: apart from load balancing for performance-critical areas (which exist in only 20% of applications, and cover only 1% of a typical such application - or 60% if you are Google, Facebook, or Amazon or the rest of the top 10), there's nothing that microservices do better than in-memory libraries. Also, centralizing everything as opposed to having processes run on various servers creates the issue of a single point of failure. Whole new classes of similar issues appear. It is basically buying trouble.

Adding microservices will never be a neutral operation, and incurs a real cost. This cost is the cost of distributed objects which has been known for over 20 years and has been well-documented by our fathers.

Once the above has been dismantled, it then comes down to: "yes, but teams can develop in different languages", "yes, but dependency management becomes easier". But once again, when it comes to these interoperability considerations, a solution also exists and was created about 15 years ago. This solution is called .NET (30+ languages compiling to a single Intermediate Language, COM components, first-class library support, etc.). Even the "yes, but I need to communicate with non-.NET in-memory applications" case is very maturely addressed by the framework (it was one of the first use cases addressed).

I also read in this thread: "the ability of teams to control their own destiny and not be blocked by other teams". With libraries, teams can choose the version that works best for them, and a well-designed in-house library would provide the extension points (Dependency Injection, Dynamic Binding...) that would allow the clients (the other teams) to specialize the behavior.
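For instance, a minimal sketch (hypothetical library, Python for brevity) of that kind of extension point:

    # A hypothetical in-house library with an extension point: client teams
    # inject their own behavior instead of forking the code or calling a service.
    def render_report(rows, formatter=str):
        """Library-owned logic; 'formatter' is the client's extension point."""
        return "\n".join(formatter(row) for row in rows)

    # Team A takes the default; Team B specializes without touching the library.
    print(render_report([{"id": 1}]))
    print(render_report([{"id": 1}], formatter=lambda row: f"row #{row['id']}"))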

So in a nutshell, my main reproach is this: if people put the same effort into learning system and component design as they put into learning microservices, there wouldn't even be a need to talk about the latter. My regret is that people will always choose the hype and the new. But hey, that's how the game works.


The discussion about monoliths vs. microservices reminds me of Knuth's 10+ pages of Pascal code vs. McIlroy's 6-command shell pipeline http://www.leancrew.com/all-this/2011/12/more-shell-less-egg...



You could even use Java on JVM and let your ops team decide which OS should be used.

Although unless Oracle changes its ways, it seems we will soon be at a point where Apache will port Tomcat etc. to .NET and we can use the best language with the best servers from the best communities.


Well maybe the more important part is building "laboratories of democracy" [1] than a republic. ;) Give each part of the org the ability to experiment, see what works best, and spread those ideas around.

[1] https://en.wikipedia.org/wiki/Laboratories_of_democracy


Any organization that designs a system ... will inevitably produce a design whose structure is a copy of the organization's communication structure

— M. Conway


I've heard this said a few times, but never really felt it rang true for systems I've seen/designed. Anyone have any examples?


Damn, can I get a job where you work?


I think you've missed the point in several different ways here.

First, there's a difference between "I've seen it done like X" and "When done well, it's done like Y".

Too often we play this game where we talk, teach, and apply Y, but then in the real world it gets done like X. Turns out that X sucks a lot, so then we throw away Y.

Microservices may be done poorly in most real world applications. In fact it would surprise me if they weren't.

This article doesn't help much. Microservices are not just another version of SOA. It doesn't work that way. In SOA, you start with a general category of service, say "logging". You write it simple, yet broad, and it's supposed to handle all folks that need logging. In microservices, you're doing one tiny little thing, like "make a record of the errors of this process available on the web". The vast majority of times, if you define what you're doing narrowly enough? The O/S already does it. Whereas if you start broad? You're writing code.

Then you slowly and methodically expand on that mission a little bit at a time, refactoring your microservices as you go. It's both a different way of structuring apps from monolithic days and a different way of looking at constructing and maintaining apps. If you think of it as the same blob of binary bits broken into smaller pieces, you've missed it. Likewise if you think of it in terms of "services". The "micro" is the key word here, not the "services" part.

This actually requires a much heavier interaction between developers, not setting up fiefdoms. If done correctly, it pushes larger groups of developers across teams to work more tightly together. If it's doing something else? You're doing it wrong.


If you're starting "micro", why do you need the "services" part at all? Why not just make it a function call?

To use your example, if my task were "make a record of the errors of this process available on the web", my first thought would be to write to stderr, pipe stderr to a file, and then serve that from the webroot of some commodity webserver like Apache, 'python -m SimpleHTTPServer', or even S3. If the format is too hard to read, I'd probably write a quick JS viewer that can collapse stack traces, aggregate similar errors, etc, all done in the web browser. Or use cut/awk/sed to process it before dumping it in the webroot. It's only if the data gets big enough that it starts freezing the browser that I'd reach for ELK or something similar. I don't know anyone who would call this a "service" though; indeed, it seems the polar opposite of what most people talk about when they say "microservices".
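In Python terms, the whole task could plausibly be this small (paths and binary hypothetical):

    # Hypothetical sketch: capture the process's stderr into a webroot and
    # serve that directory. No service to write at all.
    import functools, http.server, subprocess

    WEBROOT = "/var/www/errors"  # hypothetical path
    with open(f"{WEBROOT}/app.log", "ab") as log:
        subprocess.Popen(["./the-app"], stderr=log)  # hypothetical binary

    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=WEBROOT)
    http.server.HTTPServer(("", 8000), handler).serve_forever()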


It is a pure function call. There are no side effects, just a clear and simple input stream and a clear and simple output stream.

I agree with your characterization. All except maybe the "I'd probably write a quick JS viewer that can collapse stack traces" part.

Here's the thing: I'm so minimalist that I'm only going to add frameworks and libraries as a last resort, if you stuck a gun to my head. I'm going to resist as much as I possibly can.

I love writing code. And every line of code I write has to be compiled, deployed, is prone to bugs, has version dependencies, and so on. Many times it adds unnecessary complexity to the solution. I love using cool frameworks and libraries. Same goes for them.

The magic here is that the vast majority of whatever you're doing? It's already been done. Use O/S code that's run successfully for ten, twenty years. Write your own code only as a last resort. Microservices shouldn't have more than 100 lines of code.

Yes, it's a function. Maybe 3 or 4 closely-related functions. If it gets past that level of complexity, refactor.

(The only thing I'd add here is that you'll end up factoring out common code as you go along. That code needs to be put into a shared corporate library, which also should be small, along with shared types)

When done well, you end up with the realization that you've been doing it wrong all along. When done poorly, you just make an even bigger mess than you did doing it the old way.

Here's one small example. I was writing a production project for some folks a few years ago. We needed some way of showing system status.

The old way could have used all sorts of things: logging systems, server metric systems, and so on. I would have probably done some research, downloaded a library or framework, set it up along the happy path, and made some really awesome html dashboards.

Then the library would change, or the metrics would be different, or the configuration would need tweaking, etc.

I figured out what we really needed to see was time/date stamps on certain files. So I pipe ls to a web file and cron it. Five minutes later it's working. It'll be working ten years from now. If we need something more complicated? We'll add it. One tiny little bit at a time.

I'm okay with calling that a microservice, since it's infrastructure code that needs to be deployed and maintained as part of the app. But it's really only a line of BASH.
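If you wanted the same thing without shell, a hypothetical Python equivalent is about as small:

    # Dump file timestamps where the web server can see them; run from cron.
    import os, time

    with open("/var/www/html/status.txt", "w") as out:    # hypothetical webroot
        for entry in sorted(os.scandir("/data/outputs"),  # hypothetical data dir
                            key=lambda e: e.name):
            out.write(f"{time.ctime(entry.stat().st_mtime)}  {entry.name}\n")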

Some people would say that what I did was completely trivial. These are people who do not understand what the hell is going on. That was a win. If I could have 5 or 10 microservices like that which provide business value? I'm freaking way ahead of the value curve as opposed to a similar team who's still slogging through 7,000 lines of mostly boilerplate code that does basically the same thing.


I'd agree philosophically that this is the right way to do it, but that's not what most people are speaking about when they say "microservices". They usually mean small independent servers communicating via RPC, each of which only does one thing. I'd say that this is pretty much the opposite of what you describe, since now each task needs the overhead of serialization, deserialization, marshalling, error-checking, process monitoring, etc.


There's quite a bit of variability in what people mean when they say "microservice". I've read quite a few articles over the last few years that describe things as I just did.

However, you are correct. Most people are unable to get the cruft out of their head, so when they talk about microservices, they mean something like "exactly the way we used to do things, only with a few differences"

I was talking to a team last year that wanted to do microservices. They had a framework, a lot of tools, and were busy doing research, most of which consisted of reading fanboy posts and walking through happy-path tutorials.

When I started listening to what they were talking about in terms of an architecture? Wow! Too much to go into here, but one of the things was a "microservice" that at the beginning of the pipeline cleaned all the data.

Think about that. One "service" which was responsible for knowing every data type, performing all safety checks, and ensuring that all of the other services could run correctly. All in one place. You screw that one up? The whole thing is broken. You couldn't ask for tighter coupling.

And this, they thought, was better than a monolithic app.

I'm a big fan, but microservices are going to make a huge mess in the industry that it will take decades to clean up. That's because most people doing them will think of them just as you suggested.

(Having said that, the ones who don't are going to run over everybody else in terms of productivity. Good for them and the orgs they work for)


> I'm okay with calling that a microservice, since it's infrastructure code that needs to be deployed and maintained as part of the app. But it's really only a line of BASH.

I'm under the impression that most people wouldn't call that a microservice. Moreover, isn't that solution completely dependent on the implementation of the code that is writing those certain files, without making this dependence explicit - and pretty difficult to test automatically?


Actually no. No more than CSV dependency prevents spreadsheets from working with one another.

I think the key thing here is your word "explicit". There are lots of ways to make things explicit. The concerns of coupling and cohesion don't go anywhere. There are other ways to address them.

Testing is actually significantly easier, not more difficult.


I'll take your word on testability and maintainability, but still, I'm not convinced this would be considered a microservice by most proponents of the term:

> Like in SOA, services in a microservice architecture[1] are processes that communicate with each other over the network in order to fulfill a goal. Also, like in SOA, these services use technology agnostic protocols.[2]

https://en.wikipedia.org/wiki/Microservices


> Microservices shouldn't have more than 100 lines of code.

I'd really like to see a non-trivial example of this, because it sounds ludicrous on its face


If you read my top-level comment you'll see that I agree with you 90%, but be careful not to throw the baby out with the bathwater.

There are legitimate reasons why having those fiefdoms is beneficial despite the overhead, especially in a SaaS world where you are not distributing binaries. Being able to use different languages for different purposes, and to deploy them to different hardware, can be quite a bit more efficient in terms of hardware costs at scale. Also, there is something to be said for "you write it, you run it", as it makes developers more careful and decreases finger-pointing. Again, don't take this as a rebuttal; the overhead is potentially a lot, but if clean interfaces can be determined and make sense from a high-level business perspective, then there can be a net gain.


When you have a large service, you can end up with a lot of coupling, meaning that it is hard to have teams that specialize in specific parts of the application; you need a huge team that knows everything about everything, and that doesn't scale as your code and company grow. And you can see the effects of that in reduced productivity and technical debt beyond belief.

Even if you hire high-end talent, they will be forced to worship people that have the knowledge about how all the bloated jenga-tower mess works.

And that's only the people problem, let's not even discuss how you build, deploy, test things in a reliable way without taking days to produce a working build.

That, and the fact that your quality of life will suck, since you won't be in control of such mess.

With microservices you have one service that does a limited set of things, that is easy to monitor, maintain, test, deploy and even entirely replace if necessary.


I agree about fiefdoms, and want to address a particular aspect of that pattern, somewhat complementary to yours, that I've seen at least in the (very large) organization that I'm a member of.

Specifically, like a microservice allows encapsulation of functionality, it also allows encapsulation of blame[1]. In a monolith (from whence the org I'm in came), a build failure or test regression could be caused by any number of failures across any number of horizontal bands in the organization. Oh, the build automation crashed because the build team updated to the latest version of Java but the build didn't. Oh, the UI Filters stopped working because the API team changed something without deprecating. It meant that development, in spite of agile efforts, still had a tick-tock cadence, where breaks halted work and tracking down the responsible parties and getting things fixed might take time (a lot of areas with deep specialties required to understand why something might be wrong). This also meant, because of the way the organization was structured and the way the build was structured, that "pressure" was directed along very hierarchical routes. Managers saw bugs from customers and pressured testers and dev-ops people who maintained automation to investigate causes and transfer responsibility to developers who might be able to actually fix the problems.

As we've been decomposing into microservices, and likewise aligning along feature teams the blame gets allocated at API/service interfaces[2] instead of top-down. Since the build, deployment, uptime, and algorithmic functionality of each service is theoretically the domain of a single team, the blame-flow is more distributed and simple. An algorithmic bug, a build bug, and an availability issue are all addressed the same way: report the issue to the team responsible for that service, and let them work it out.

I'm not advocating that either way is better. There were nice aspects about a single-location debug tree in the monolith. I've seen teams that have become experts at deflecting blame and thus slow down the entire broader effort. And I know I'm possibly conflating two paradigms inappropriately (Feature Teams and Microservices). Just a notable pattern, to my eyes.

---

[1] I don't necessarily mean 'blame' here in a pejorative sense. Perhaps 'responsibility' would be a more neutral term.

[2] Steve Yegge describes this as being a top priority during Amazon's service/platform decomposition.


> The real reason is that having these microservices allows single developers or teams to own or control parts of the codebase [...] Sally can't see the code to Joe's service, and Joe can't make requests to Sally's production instance...

> Specifically, like a microservice allows encapsulation of functionality, it also allows encapsulation of blame

I think the example of Joe and Sally in the parent [1], and your comment, make the point of microservices. During various web-service integrations, developers not only get to blame a specific part of the system as faulty; it also helps follow the rule 'do not test your own code'. I remember a few situations in the pre-microservices era when, while developing a new API wrapper, developers discovered a flaw in the remote web service (e.g. a parameter doesn't change state), an undocumented error code, or a raw Java exception as a response. In these cases, the owners of the API are usually notified about the issue, so we may expect that it will be fixed.

I agree with you that it is hard to advocate that either way is better. It is worth noting that creating microservices leads to problems which may end with _duplicating_ (creating monolithic) code to provide the ability to communicate between the hosts (imagine a mesh network).

[1]: https://news.ycombinator.com/item?id=12134648


It happens for another reason too: everyone likes to flatter themselves that they're top-tier big shots, that they're "web scale," that they have the same scaling problems and uptime problems that Google and Facebook have, and thus they think they need to adopt the same architecture patterns that Google and Facebook use.

Someone builds a monolith, it gets some traction, and suddenly the server bill gets into five, six figures a month. Management have never been management before, and have never paid anything more than a few hundred dollars a month for servers, so their new cloud bill is mind-blowing to them. They're spending as much money as a house costs every month on renting servers. They imagine they're hemorrhaging cash left and right, they freak out, and they order engineering to rearchitect everything around a Service Oriented Architecture, because clearly it's time for the big-boy toys. Google does it that way, so clearly that's what one does when one is at scale, when one is successful and has lots of paying customers, right?

99.9% would be better off financially and technically just rewriting a V2 of the monolith using lessons learned from V1, but that's perceived as outmoded, old-world thinking, because it's not what Google does.


This comment is made all the more humorous by the fact that Google doesn't actually do "microservices" internally. They have no qualms about building giant, monolithic binaries for things, and split things out based on the need, not some blind architectural fetish. The reason being, they don't like to be woken up by their pagers.


Microservices are for organizations that can't seem to enforce encapsulation at the object level.

Devs not creating good interfaces in OO design? Stick them on a different server, that ought to show them. Of course, now your spaghettified object hierarchies are spaghettified API call hierarchies.


Services can be, and are, used when a developer wants to try something fancy in an otherwise boring, simple tech stack. No one says that; it's usually covered as 'best tool for the job', but oh, that can be very dangerous.


I do agree that people tend to over-complicate their architectures initially, this is premature optimization of another form. That being said:

* Microservices don't need to be written in the host language, nor do you need to deal with creating wrappers. Granted, interacting with Kafka/AMQP isn't frictionless.

* Microservices can be upgraded or scaled out individually without touching the larger application as a whole.

* Microservice calls can be deferred and scheduled easily.
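On that last point, a minimal sketch (Python stdlib, names hypothetical) of deferring a service call through a queue instead of blocking on it:

    # The caller enqueues work and returns immediately; a worker drains the
    # queue and makes the actual service call later.
    import queue, threading

    tasks = queue.Queue()

    def worker():
        while True:
            payload = tasks.get()
            print("calling image service with", payload)  # stand-in for the RPC
            tasks.task_done()

    threading.Thread(target=worker, daemon=True).start()
    tasks.put({"image": "cat.png"})  # returns at once; the call happens later
    tasks.join()                     # only to keep this demo alive until done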


" in 99% of my experience the services aren't a real distributed system anyway and are vaguely organized by function, developer interest, and yes, control."

But, that's kind of the point. They're a networked system of interacting processes, but not a distributed system. A distributed system generally is a tightly coupled and coordinated set of services running on many machines for purposes of resilience or scale-out. Microservices are more for organizational scale-out. A microservice might BE a distributed system, but several interacting microservices probably aren't one (pedantically speaking).

Microservices are an evolution of how large-scale integrated systems have played out as a pendulum swing between tightly coupled systems and loosely coupled systems.

After the mainframe, into the client-server era, most non-web software in a large organization (eg. most IT back offices dating back to the 90s) were NOT built as a single application, it was built as a set of independent applications that were coordinated by something: a shared database, data extracts, message queues, transaction processors and RPCs.

Data extracts were the ultimate in loose coupling. You got an update every night from other databases, and could do whatever you wanted with the data. But of course, this led to data quality problems, when customer records or inventory couldn't reconcile.

So we moved to "a service owns its data" type situations, and wrapped them in message queues or RPCs. This is basically how something like SAP works - it's a set of modules with their own schemas that are tightly coupled through a set of RPCs and messages.

But this eventually had none of the benefits of networked modularity - everything was so coupled and intertwined, it was a mess to change and upgrade - you had to do the whole thing at once (which is why SAP upgrades are so notorious).

What makes microservices different is that we've evolved them to have truly independent interface from implementation by having the implementation lifecycle of a module be tied to the server down to the metal, so it can be completely autonomous from other services. Therefore I can upgrade/replace/scale/move these runtimes whereever I want, whenever I want. I can also place proxy or gateway intermediaries between them more easily. This is "SOA done well", in effect.

We've also evolved our practice of service interface design beyond rigid IDLs and WSDLs and schemas into more permissive, extensible, and better documented APIs or Events, along with easier versioning, but the state of the art still has a long way to go there.

Not a panacea, but on the flip side, I've rarely seen a maintainable, scalable, and easy-to-change networked monolith. They do exist, but they required the controlling designers to be deeply talented, with full control over architecture decisions, and they didn't get into analysis paralysis. But even those eventually couldn't keep up with changes, so they're breaking their monoliths up today, IME.


in 99% of my experience the services aren't a real distributed system anyway

What is a real distributed system?


To work that out you need to go back to when the Galaxy was young, men were real men, women were real women, and small brown furry distributed systems from Alpha Centauri were real small brown furry distributed systems from Alpha Centauri....

Sorry...


A distributed system is one that can fail in new and novel ways


I see several people criticize microservices here. We've been doing it for about 6 years and are extremely happy with it.

A core principle which a lot of people and articles ignore, though, is reusability. I bring this up on HN every time there's a discussion about microservices, yet I've never seen any discussion about it.

Essentially, you build out the backend to act as a library for your front end. So we have login, storage, analytics, reporting, logging, data integrations, various forms of messaging, business-structural stuff, etc. etc. all bundled up as separate services. The front ends just use these services to build a coherent product. The front end is the monolith: The microservices are the cloud.

For example, let's say I wanted to create a new product called "Hacker News". I'd use our storage service to store links and comments. I'd use the login service to let users log in. I'd use our messaging service to send notifications about things like verifying your email or send password resets. I'd use our analytics backend to emit events for reporting. And so on. I could easily build the whole thing without writing a single line of backend code, and without setting up a new cluster, because every backend service has been designed with multitenancy from the start.
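In code, the front end would read something like this sketch (the platform client and service names are invented for illustration):

    # Hypothetical front-end composition over multitenant services.
    class Platform:
        def __init__(self, tenant):
            self.tenant = tenant
        def call(self, service, op, **kwargs):
            print(f"[{self.tenant}] {service}.{op}({kwargs})")  # stand-in for RPC

    hn = Platform(tenant="hacker-news")
    hn.call("login",     "create_session", user="pg")
    hn.call("storage",   "put", kind="story", url="http://example.com")
    hn.call("messaging", "send", template="verify-email", to="pg@example.com")
    hn.call("analytics", "emit", event="story_submitted")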

This ability to piggyback on a platform of services is where I think the real utility of microservices lies. Everything else — fine-grained scalability, smaller surface for tests, language-independence, swappable implementations, etc. etc. — is secondary to that.


Separate pieces of functionality into small reusable libraries? Great. Enforce separation of each library's internals from each other one? Great. Ensure each database is owned by one and only one service? Great.

Invoke those services via RPC-over-HTTP? Why???


>Invoke those services via RPC-over-HTTP? Why???

I can think of a few off the top of my head:

1. Being able to scale individual, smaller components rather than one giant application

2. Being able to build-and-deploy any one module much faster than one monolithic bundle of local dependencies

3. Universal interface with language-agnostic applications. Do you need to expose big data statistical analytics, but can't find data engineers that are comfortable with Java/Node.js/Go? Doesn't matter, hire the best ones and let them use Python/R/Clojure. HTTP doesn't care.
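For point 3, a minimal sketch (Python stdlib; endpoint and payload invented) of a Python-only routine exposed over HTTP so callers in any language can use it:

    import json, statistics
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Analytics(BaseHTTPRequestHandler):
        def do_POST(self):
            # Expects {"values": [...]} and returns their mean as JSON.
            length = int(self.headers["Content-Length"])
            data = json.loads(self.rfile.read(length))
            body = json.dumps({"mean": statistics.mean(data["values"])}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 9000), Analytics).serve_forever()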


> 1. Being able to scale individual, smaller components rather than one giant application

Why does the size of the application matter? Sure it'll take up a little more disk space, maybe even some memory, but probably less than the overhead of splitting into components. Two copies of a monolith wastes less RAM etc. than five components of which you deploy two copies of one.

> 2. Being able to build-and-deploy any one module much faster than one monolithic bundle of local dependencies

Maybe, but first see if you can make build-and-deploy of the whole thing fast. Remember that microservices will slow you down a lot in the case where you do need to spin up multiple components (e.g. every time you spin up a dev instance).

> 3. Universal interface with language-agnostic applications. Do you need to expose big data statistical analytics, but can't find data engineers that are comfortable with Java/Node.js/Go? Doesn't matter, hire the best ones and let them use Python/R/Clojure. HTTP doesn't care.

When you have an actual use case for a HTTP interface then of course you should expose one. But most of the time, YAGNI.


I described some of the benefits I am currently experiencing from using microservices where we previously had a monolith.

Spinning up multiple components generally takes a few seconds once the image is built, as it is not much more involved than spinning up a new docker container of an image that is already pulled. Each image might take a couple of minutes to build, but because there are no hard dependencies between images (unlike libraries in a monolith) any amount of them can be building and deploying in parallel.

It doesn't really matter how fast the "whole thing" can be built-and-deployed before, because in a monolith, you need every developer building and deploying their own library versions as well as the monolith. Meaning, not only do your changes to production have to wait in the queue for everyone else's, but you unintentionally pick up work from other folks along the way. That road leads down to the environment you describe, where every developer needs at some point to spin up their own dev instance and live on their own branch just to be able to test in isolation, resulting in an organization that requires merging, building, and deploying several times along the way in different environments before the change finally reaches production.

I also described a perfect use case of an HTTP interface (or RPC, or whatever language agnostic interface you want to use, it doesn't matter to me and rarely matters in the real world). Data science work is often in a different language space than web work. I'm not going to bundle native Python/R/Clojure dependencies with my node.js apps. It's too much work to maintain those bindings between runtime versions and app versions. I'm not going to force Data scientists to use JS because the library ecosystem is targeted toward completely different use cases. With HTTP or RPC or whatever transport, each team can use the best tools for the job.


> I described some of the benefits I am currently experiencing from using microservices where we previously had a monolith.

Sure - I do appreciate it. It's just that my experience in a company that used both was that the microservices weren't worth it - more generally I think a lot of people leap on microservices without thinking about other ways to achieve what they want - so I want to fully work through the options.

> Spinning up multiple components generally takes a few seconds once the image is built, as it is not much more involved than spinning up a new docker container of an image that is already pulled. Each image might take a couple of minutes to build, but because there are no hard dependencies between images (unlike libraries in a monolith) any amount of them can be building and deploying in parallel.

You can do a parallelized build for a monolith too - each component can build in parallel. (Assembling the final image obv. has to be done at the end, but it's a small part of the build time).

> It doesn't really matter how fast the "whole thing" can be built-and-deployed before, because in a monolith, you need every developer building and deploying their own library versions as well as the monolith. Meaning, not only do your changes to production have to wait in the queue for everyone else's, but you unintentionally pick up work from other folks along the way. That road leads down to the environment you describe, where every developer needs at some point to spin up their own dev instance and live on their own branch just to be able to test in isolation, resulting in an organization that requires merging, building, and deploying several times along the way in different environments before the change finally reaches production.

If your team structure is such that you need individually versioned components (which has its pluses and minuses), you can do that while still having a single final deployable.

> I also described a perfect use case of an HTTP interface (or RPC, or whatever language agnostic interface you want to use, it doesn't matter to me and rarely matters in the real world). Data science work is often in a different language space than web work. I'm not going to bundle native Python/R/Clojure dependencies with my node.js apps. It's too much work to maintain those bindings between runtime versions and app versions. I'm not going to force Data scientists to use JS because the library ecosystem is targeted toward completely different use cases. With HTTP or RPC or whatever transport, each team can use the best tools for the job.

I think it's worth mentioning a lot of people jump to HTTP without thinking, when actually something like Thrift is much better suited to an rpc-like use case.

I've actually come to think that the best languages are general-purpose languages and you should just use one language (Scala) for everything. I accept that that's a minority viewpoint though.


> I've actually come to think that the best languages are general-purpose languages and you should just use one language (Scala) for everything.

I disagree very strongly with this. My original background is in scientific programming, which I learned primarily during my PhD. Since then I've been working in, and now leading, teams comprised of both data scientists (principally from a scientific or mathematics background) and data engineers (primarily from a software engineering background). I can tell you from experience that trying to do everything in one language works extremely poorly in practice.

Scala is a great example. People who focus primarily on scaling out relatively simple data processing systems, perhaps with some machine learning thrown in, are big fans of scala. Any scientific programmer that I know (note that machine learning, the most popular kind of scientific programming on HN, is only one very tiny part of the field), including the data scientists in my team, simply cannot do their jobs with scala.

The typical patterns scala encourages are fundamentally unsuited to advanced scientific programming, which is why almost nobody in academic science uses scala [insert obligatory indignation about the wonders of functional programming here]. The number of scientific libraries available for scala is absolutely dwarfed by the C++/Python ecosystem.

Note that I'm not saying that scala is a bad language; I'm saying that every language has advantages and disadvantages for different kinds of work, and different mindsets.

I won't presume to accuse you of this, because I don't know your position, but this kind of viewpoint is typically held by the kind of programmer who dramatically underestimates the complexity of scientific programming.


Shrug. In a previous job I ported scientific code from R to Scala and 99% of the time it was very straightforward. I'm not saying that scientific code is easier than any other code (if anything the opposite is true), I'm saying that the things that make a good language for science are the same things that make a good language for anything else.


Because we have to reinvent the message bus every fifth year, so we have to de-invent it in between.


Well, I guess most web apps are built with PHP or Python, which don't have good module isolation / visibility enforcement features, and have weak type systems. So that's how adding a network layer can prevent people from abusing the interface.


I have been doing it since the early 90's.

It is called unit, module, package, library.

No need to put some network layer, with its own set of problems, between method/function calls.


Libraries typically don't have databases or daemons. It's operationally very different.

Can I envision a language-specific framework wherein "microservices" are just plugins that implement interfaces (Java interfaces, for example) for exposing RPC endpoints, which you then "mount" in your framework, allowing developer-enforced separation of concerns, but still allowing a single codebase to share things like config structures, database accessibility, etc.?

Sure. I can't imagine why anyone would want to do it that way, though. For example, you couldn't realistically open-source one of your services, or write it in a different language. Service-specific database stuff would get hairy, quickly. And so on.
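For what it's worth, the envisioned in-process version might look like this sketch (all names invented):

    # Hypothetical "microservices as plugins": each plugin implements one
    # interface and is mounted on a shared in-process router.
    from abc import ABC, abstractmethod

    class Plugin(ABC):
        @abstractmethod
        def handle(self, request: dict) -> dict: ...

    class Greeter(Plugin):
        def handle(self, request):
            return {"body": f"hello {request['name']}"}

    class Router:
        def __init__(self):
            self.mounts = {}
        def mount(self, path, plugin):
            self.mounts[path] = plugin  # shared config/DB handles could be passed here
        def dispatch(self, path, request):
            return self.mounts[path].handle(request)  # a call, not a network hop

    router = Router()
    router.mount("/greet", Greeter())
    print(router.dispatch("/greet", {"name": "bob"}))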


Daemons are called threads and coroutines.

As for the rest, that is what a proper design of module interfaces is supposed to be.

Apparently too many people nowadays don't read books about ADTs (Abstract Data Types).


You don't get more reusability from a bunch of microservices compared to one big service; I don't see where you got that.


I also don't see how microservices are more reusable.

However, the replaceability is usually much better, due to being forced to define sharp boundaries with clear interfaces.


But that is the thing.

It just means that instead of using the language features and creating clean interfaces across modules, they were writing spaghetti code.

With microservices they get to write spaghetti REST calls with the added fun of distributed computing failures.


Normal software libraries also have sharp boundaries with clear interfaces. Why use shitty RPC-over-HTTP? At the very least you can use something sane like Thrift.


> the replaceability is usually much better, due to being forced to define sharp boundaries with clear interfaces

If by "sharp boundaries with clear interfaces", you mean API specification, how is it a microservice thing? I thought the benefits of microservices lie in behind-the-scenes details.


I think it's the old loose coupling vs. late binding discussion. Some people confuse the former with the latter.

You can very mechanically achieve late binding with microservices, but it takes extra effort to actually achieve loose coupling; and guess what, the effort might or might not be less than doing loose coupling within the components of a monolith.


I was just reading the other day this article about the Backend for Frontend pattern (BFF) which suggests the best way to avoid tight coupling is for services you don't want tightly coupled to be owned by different teams, and to allow reasonable duplication of functionality between them. Everybody knows how hard it is to get another team to implement functionality for you.

This is where I got the BFF pattern from:

http://samnewman.io/patterns/architectural/bff/


I fully agree with this. If a system cannot be extended by some external team using only external extension points, all claims of modularity and architecture are idle talk.

I tend to use MEF (on .NET) and OSGi (on the JVM) not because I actually need extensibility, but because I want to enforce certain architectural constraints.


You missed my point. I did not say reusability is the only factor that dictates the use of microservices.

If we wrote everything as one big service, we'd lose the dozen-or-so benefits of microservices.

Hypothetically, if we rewrote what we have today as one big monolith:

* We couldn't selectively open-source parts of it.

* It'd really all have to be a single language, as opposed to the three we use right now.

* ...and if we decided to change the language (like we're in the process of doing with a transition from Ruby to Go + Node.js), we'd have to rewrite all of it.

* Deployment would be potentially more brittle since you're deploying a single codebase. Our apps are designed to gracefully tolerate partial unavailability.

* We'd have to be careful to control individual services' resource footprint. We have some services which are super light and have very few dependencies — they don't carry a monolith on their back.

* We'd have to jam in product-specific stuff for N different products into a single codebase. Some of our microservices aren't reusable; they're completely product-specific. Squishing lots of unrelated concerns into a single codebase is not feasible.

* Developers would be trying to tightly couple services and violate separation of concerns faster than you can spell "technical debt".

...and so forth.

Maybe I didn't articulate it well enough, but my original point was that microservices are great for all sorts of reasons, but a too-often missed point is the benefit of reusability if you just design things right.

Put differently, microservices are a lot less powerful if you don't design them for reusability (and, as a prerequisite, multi-tenancy)


You are if you care about whether you're deploying things that aren't necessary. With microservices you can take a single specific service that two projects share and reuse it. With a monolith you can't unless you deploy everything else as well. That can easily fail a requirements check (say, hardware requirements are too costly) and force you to redevelop the service for the second project.


I'm not trying to be facetious, but isn't that just called a library?


Not as I understand microservices. A library would be something that's compiled or included in to your application code. A microservice is a separate process that the application communicates with over a protocol (eg http).


I think the point is you can achieve the same result by just using a library.

There are potential benefits and downsides to services over libraries of course, but unless there are specific requirements that make a library unsuitable, all else equal I would rather just use a library.


There are a number of downsides of libraries that make services attractive by comparison. Of course, it's not as if one or the other is better in all circumstances.

First, libraries don't encapsulate very well. A storage service hides its representation from users. A storage service can change its implementation at any time without users having to know or care. Perhaps you migrate from Cloud Provider X to Y. A shared storage library, on the other hand, can't just change its implementation on a whim. Libraries vend functionality, and they encapsulate functionality at a coding level, but they don't encapsulate much at a systems level, especially in distributed systems. For example, if this function would talk to a data store like a database or another remote service, then under the service approach, only one application talks to the database, while with the library approach, a ton of applications using the library do -- and now they all need network access to that DB and credentials to talk to it. They care about the details of how to reach it. Every library consumer needs that configuration.

With a service, you can implement security features like input sanitization and access control on the service-side. With a library, you're reliant entirely on trusting its users. (A library can have these functions too, but a malicious user or attacker can bypass them if they compromise the machine it's running on. A service implementation runs in a different security context.)

When a library changes, you need to tackle the problem of distributing the new version to every application that uses it. When a service changes, you just deploy the new version, and all systems using it immediately experience the new version. When a service changes, you can completely coordinate the way in which the new functionality rolls out. You can pass 1% of requests through a new version of the code, if you want. You can pass requests through both versions of the code, and compare one to the other. You can in principle do these things with a library, but few organizations have the ability to roll out updates to all usages of a library nearly as effectively as they can change and redeploy a service. Services allow the service owner to centrally control the behavior of the functionality in all circumstances where it's used.
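The 1% case is simple to picture; a minimal sketch (handlers hypothetical) of that kind of weighted rollout:

    # Route a small fraction of requests to the new version of a service;
    # the rest go to the stable one.
    import random

    def handle_stable(req): return {"version": 1}
    def handle_canary(req): return {"version": 2}

    def route(req, canary_fraction=0.01):
        handler = handle_canary if random.random() < canary_fraction else handle_stable
        return handler(req)

    # A real router would also tag responses so the two versions' metrics
    # and error rates can be compared side by side.
    print(route({"path": "/storage/get"}))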

Libraries don't help you scale different parts of your system differently. For example, let's say that the storage service needs to cache a lot of content for good response times. If that's a library, then all of the applications using it need to have adequate RAM or disk for the cache. With a storage service, that can be hidden behind the service interface, and only the machines running the service need the extra RAM/disk.

Services are portable in a way that libraries are not. A library needs to be implemented in a way that can be consumed within many platforms. For example, it might not be easy to vend the same function for applications on Windows, Ubuntu, MacOS. With a service, I might vend only an interface description, or lightweight clients for the platforms I target.

A service separates the consumption of the functionality entirely from its operation. If there's a defect in the service, like it's returning the wrong result, then that's the service owner's problem to fix and roll out an update. If the library everyone is using in their applications is returning the wrong result, then that's now everyone's problem.

With a service, I can measure anything and everything I might want to know about the behavior of an operation. I can record metrics and log files and anything I want. With a library, there isn't always an easy way to get the information out of applications that use it. Services make applications easier to understand and troubleshoot, because you can reduce the amount of code running on any given machine (VM) to the smallest reasonable amount, compared to applications that embed potentially many libraries. Debugging a library is tricky because you need to debug the application too. I can debug or instrument a service all on its own, without affecting its clients.

The right answer to this question might depend on the scale of the component's usage, the scale of the functionality, and how much you care about these properties, and encapsulation, and coordinated update, etc. Services start to become compelling at a sort of inflection point where you have a lot of usage, enough that updating all applications using a library would be problematic, and you want to decouple things (both components within software and the organizations that develop and run them). I probably have more to say about this, but I'll leave it here. There are plenty of reasons why you'd choose to make something a library, but there are also good reasons to choose a service.


Microservices also enable you to scale individual parts of your application. If you need better performance for, say, image compression then it's much easier to add resources to a service that lives in its own process space (or its own server, or rack of servers, or data centre..) than it is to improve the performance of a specific library in a larger application.


With some work you can get the same behaviour - but a microservice is also about the data (like user IDs and passwords for a login microservice).


So you deploy your easy-to-manage monolith, and expose your library through an RPC service: REST, Thrift, or whatever is hip these days.
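
Something like this minimal sketch, where compute() stands in for a function the monolith already calls in-process:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def compute(x: int) -> int:
        # Stand-in for an existing library function.
        return x * 2

    class RpcHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers["Content-Length"])
            body = json.loads(self.rfile.read(length))
            result = json.dumps({"result": compute(body["x"])}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(result)

    if __name__ == "__main__":
        # The same library, now reachable by anything that speaks HTTP.
        HTTPServer(("localhost", 8000), RpcHandler).serve_forever()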


From experience at the classic ball-o'-mud web-servicey end of the spectrum: it gets so hard to manage and deploy and test services that you end up standing up new services to do "the same thing, only slightly different", because reusing existing services is too hard.


It's especially fun when you get to write the monolith 3 times over for web, ios and android.


Why would you write it three times?

You write it once and have it expose an API. Have your web, iOS and Android clients use said API. Easy peasy. Optional: allow your customers to use the API, too.


That's not 'microservices'


Sounds kind of like what I would do, except I would use a database table / controller for each of the things you consider a microservice.


Equating "Microservices" with "Information Superhighway" really shows the tech bubble that this article is written in. "Information Superhighway" was a vacuous but mainstream term used by politicians and public figures. "Microservices" is a tech hype train led by expensive consultants and pickaxe companies thriving off the current tech boom.

Don't get me wrong, a service-oriented architecture is the only thing that scales to large companies. Once you get to dozens of engineers and millions of lines of code, you will inevitably need an SOA because of Conway's law. Also, there is a learning curve to building microservices, which improved tooling really helps with.

However the thing that really grates at me is how these articles say things like:

> Services are now an everyday, every-developer way of thinking

With nary a mention of the overhead. There is no way around it: distributed systems have an irreducible complexity, no matter how good your tooling and boilerplate are. You have to put in extra work to decouple everything and handle failure in a way that actually reaps the benefits of the distributed system.

And in the end, what these articles always gloss over is the interface between these systems. If you can easily define an interface between systems that stays relatively stable as the service evolves, then congratulations, you have a good candidate for a service with minimal overhead. But for most applications, those interfaces are shifting all the time, and there is no better tooling for that than multiple logical services running within one binary and one build system, where integration testing and validation are cheap. This is a real fucking problem, people. It's not going to go away because there's a couple billion dollars' worth of venture-backed startups ready to blow their cash on you in the vain and most likely misplaced hope that they are actually going to have to scale to dozens of engineers. Premature scalability is one of the worst siren songs for young engineers, and we're seeing it in spades right now.


It's all a balancing act. The two main considerations are developer overhead and scalability.

It makes no sense for a single team to run 2000 microservices that come together into a single app. The amount of overhead for managing so many interfaces is insane.

At the same time, it's hard to justify 2000 developers working on a single binary. You end up with entire teams dedicated to managing and deploying. Companies do it (Google), but it's not without costs.

If every microservice runs with the same specs (container size/# of containers), then there is nothing gained from scalability. If anything, you're probably wasting a large amount of resources if your containers cannot shrink any more.

At the same time, if you are deploying thousands of copies of a single binary when most of the resources go to 1% of the code, then you're wasting resources with needless copies.

The (micro)services fad is definitely brought on by the recent rise of virtualization. It's probably a bit overboard.


You got the Google part wrong. Google is all in on microservice architecture and has over a decade of experience doing it. There is not some gigantic google.exe program. Perhaps you have heard that Google uses a single source code repo and drew an incorrect conclusion.


Speaking as a young engineer, I can tell you many are already looking past microservices towards things like AWS Lambda, AMQs, and BaaS, which make a great deal more sense. Why? Because it helps reduce dev effort down to purely the logic you'd have to write no matter what, with better guarantees about reliability and scalability, and less maintenance. I hesitate to say 'serverless' because that still feels somewhat out of reach, but that's the direction things are trending towards.

I also get the sense that a lot of purists moulded in the ways of yore are alarmed at the waning relevance of their skillset. This to me seems like a bigger problem than the evolution of software paradigms.


Oh yeah, I don't think it's bad research going into this. I mean I've been writing software professionally for almost 20 years now and I am super excited about (for example) containerization and other trendy areas. I think it's great that people are pushing the boundaries of SOAs, and I think microservice is a good term describing how the boundaries are being pushed. We will all benefit from the improved tooling and experience gained.

My only complaint is this ostrich mentality of declaring ubiquity and ignoring the downsides and unsuitable cases. I suppose drinking the koolaid might be necessary to really buy in and push the paradigm forward, but sometimes it feels like willful ignorance.


I agree completely. I may simply have had the luxury of so far working in environments wherein software fads aren't hailed as Swiss-Army panaceas. Here's to hoping my track record doesn't ever trend Dilbertesque :)


Me too. At this point in my career I've worked very closely with more than 100 programmers for long periods of time, and only one of them would I classify as being overly interested in fads. To the contrary, I give a ton of credit to the vast majority of programmers as always being pragmatic about the problem at hand.

I think the problem lies more in the PR, advocacy and conference circuits where there's an incentive and agenda to present things as silver bullets. It mirrors a greater problem in public discourse that everything has become so polarized that nuanced debate is drowned out by noisy, confident blowhards.


"The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt." (Bertrand Russell)

Back when I was still an employee, I strove to take a more intermediate tack that let me be more vocal about technical direction. What's surprising is how willingly people with authority will listen to what you have to say, once they realize you know what you're doing. I've also been fortunate to have had roles that enabled that in the first place. And now I'm trying my hand at entrepreneurship, which feels practically utopian :)


Yeah, we were debating about whether I should write a post about "the end of servers" instead of "the end of microservices." I thought we'd get a livelier discussion with microservices, ;) but I totally agree that they are already getting to be yesterday's buzz word.


While I appreciate the allure of such models, one point gives me pause: tracing and debugging. Maybe the tools have caught up to the point where tracing why a job wasn't triggered or failed in the middle is a simple task, but it always seemed tedious to me. My hope is that the explosion of the microservice framework will trigger a corresponding explosion in the tools to trace/debug it.


I think this is a smaller evolution than microservices / non microservices.... as others have pointed out, the real discussion is autonomy versus centralised control.


If that's the case, I'd argue we're conflating software development management with software engineering execution. It's illogical to assert that utilizing microservices implies entirely forgoing centralization.


I agree with you. In my defense, people who want microservices are generally asking for developer autonomy, but just using technical language to do it, rather than managerial.


I can believe that. This is why it's so crucial to have good management. I've also realized through my own experience that a good manager has to be in the trenches coding with his/her reports, at least part of the time. It's amazing how fast you can become irrelevant when you're unplugged and stuff like this is exactly why.


> AWS Lambda

Has local dev/testing been solved for that yet, and different environments, etc?


Getting there. Wrapping Lambda functions for local execution isn't too hard since it's really just JS. e.g. https://www.npmjs.com/package/node-lambda
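
The same trick is even simpler if your handler happens to be Python; a minimal sketch (handler and event shape are invented):

    import json

    def handler(event, context):
        # An ordinary Lambda entry point: just a function of (event, context).
        return {"statusCode": 200, "body": json.dumps({"echo": event.get("name")})}

    if __name__ == "__main__":
        # Locally, "invoking the Lambda" is just calling the function with a
        # fake event and no context -- no AWS involved.
        print(handler({"name": "local test"}, None))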


> Premature scalability is one of the worst siren songs for young engineers and we're seeing it in spades right now.

This. What happened to "make it work then make it scale" ?


> This. What happened to "make it work then make it scale" ?

The fact that it is wrong (or rather, incomplete)

It should be "make it work with a clear path to scaling, then make it scale" - that's true with respect to both software engineering and business plans.

It's the same with "premature optimization is [the|a root of [all]] evil" - premature optimization is a waste of time and often paints you into a corner. However, if you don't think about the bottlenecks and how to optimize them in advance, you more often than not find later, when profiling, that there are no clear hotspots, and the performance is "lost" all over the place.


Not my experience at all. Systems that were designed as simply as possible can be unpicked and made scalable - it takes time and effort but it's doable. Systems that were designed to be scalable are much worse, because you have to undo all of that work first before you can start actually making them scalable.


Simpler systems are easier to scale than more complicated ones, for sure - but things like making sure your database schema is easily shardable are often cheap (even free) up front, and might be extremely expensive later on. I'm familiar with a project that eventually spent a few hundred thousand dollars on some high-performance database (Oracle RAC or something, I don't remember exactly), where, had the schema been shardable in advance, they could have been fine with 10-20 cheap machines running MySQL.
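
"Shardable in advance" can be as cheap as making sure every table hangs off a key you can hash; a toy sketch of the routing that buys you (host names invented):

    import hashlib

    SHARDS = ["mysql-01", "mysql-02", "mysql-03"]  # the cheap machines

    def shard_for(user_id: str) -> str:
        # A stable hash of the shard key picks the host. As long as every
        # query carries user_id, the data can live on N machines instead
        # of one expensive box.
        digest = hashlib.md5(user_id.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]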

And I remember an earlier project that had to be completely rewritten in the early '90s because at the time, Oracle couldn't scale even with money thrown at it (and DB2 reportedly could - but the required mainframe was not an option).

Simple is often best, yes. But not oblivious-to-the-future simple.


Perhaps you guys are talking about different things? There are certain architectural decisions (schema design, etc.) that can dramatically impact future scalability. Sometimes if you implement it the "simple" way the design can very quickly reach its limits and have to be replaced.

And, honestly, sometimes you've done variants of the same problem enough that you know where some of the bottlenecks are inevitably going to be - so why not design them out right at the start?


This is a great point. I don't think the hype is due to a fad alone. There are a lot of good reasons why microservices are much easier to create and deploy today: containerization and the associated technology around it. This makes microservices a very natural paradigm for app deployment.

Of course, it's not a holy grail. But we see that kind of thing over and over again. I think it's OK; it gets developers genuinely excited to try new technology. It's only when this hype influences dangerous decisions that I get worried.


I wrote the post with pretty rose-colored glasses on. :)

I totally agree that microservices can be a form of premature optimization, in particular because of the cost with today's tooling. But I think there's hope that a lot of those costs will go down (both in terms of dev time and infrastructure) with things like AWS Lambda, etc.

Anyway, if devs think a little more about the interfaces, I think that will be a good thing.


One thing I don't like about SOA is that an error does not have a full stack trace. I know Zipkin exists but it's nowhere close to what we had in a monolithic app where you could just put a breakpoint and trace back to where exactly an error is thrown.

If we can find a way of running a giant monolithic app in development and production environments without vertically scaling our machines, I would rather have that.

Every bug I'm working on is like a mystery where I have to hop across many services to find out what's going on.

I also think HTTP is the worst protocol for apps to talk to each other.


How is that different from debugging a multi-threaded or networked application? In both cases barely entering the debugger changes the behavior of the system. Heck even an asynchronous program running on a single thread doesn't have full stack traces.

Having worked with both monolithic and SOA apps, the latter yields radically simpler architectures, and from there you spend a lot less time debugging.

The older I grow as a programmer, the more I dislike monoliths. I'd rather have simpler programs where entire classes of bugs are guaranteed never to happen. I have yet to see a single monolithic app without serious technical and conceptual debt. The worst thing is that back when I thought monoliths were great I had absolutely no idea things could be so much simpler.

Also, HTTP/1.1 is a fantastic protocol. It's dead-simple to implement, debug, cache, and send through proxies that won't understand your custom headers or body format and whatnot. It even gives you an extra layer of routing on top of TCP/IP! This is exactly what you want to build systems with.


What's wrong with HTTP? If you're going to build an RPC protocol, you are going to need encryption, stream multiplexing, a way to distinguish RPC methods, support for extensions (e.g. auth), and compression.

Add the fact that high performance HTTP servers and clients exist in most languages, and building an RPC protocol on top of HTTP sounds pretty attractive. No wonder gRPC did exactly that.
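
Concretely, most of that checklist falls out of HTTP for free; a rough client-side sketch (endpoint, method name, and token are invented):

    import gzip
    import json
    import urllib.request

    def call(method: str, params: dict, token: str) -> dict:
        # HTTPS gives encryption, the URL path distinguishes the RPC method,
        # headers carry auth, and Content-Encoding handles compression.
        req = urllib.request.Request(
            f"https://api.example.com/rpc/{method}",
            data=gzip.compress(json.dumps(params).encode()),
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
                "Content-Encoding": "gzip",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())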


The potential for tons of round-trip calls in most RPC-style systems is one place to start; Cap'n Proto addresses this with its style of avoiding round-trips for many network transactions, which can drastically reduce network chatter. Security is addressed simultaneously (though not necessarily sufficiently for many people or regulatory bodies) in Cap'n Proto, at least, via use of capabilities instead of building it into the transport layer.


Any service taking requests from a client generates a UUID and passes it through to any additional services it calls. Most microservice frameworks have this functionality built in.

You log to a central store such as an ELK stack or any of the great third party offerings. When you need to see the entire stack trace you search by the id.
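
A minimal sketch of the pattern (X-Request-ID is one common header convention; names are illustrative):

    import uuid

    def correlation_id(headers: dict) -> str:
        # Reuse the caller's ID if present, otherwise start a new trace.
        return headers.get("X-Request-ID") or str(uuid.uuid4())

    def downstream_headers(request_id: str) -> dict:
        # Attach the same ID to every downstream call and every log line,
        # so one search in the central log store reconstructs the "stack".
        return {"X-Request-ID": request_id}

    rid = correlation_id({})
    print(f'request_id={rid} msg="calling billing service"')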



I was thinking of a simpler "Y U no log?!?" with correlation IDs but this is much cooler.


Depending on your requirements, microservices may be a good or a bad solution. If the communication/protocol becomes too chatty, that may be a sign you are doing it wrong.


Tracing and debugging-in-development solve different problems, though: tracing is there to help you understand issues that you can't reproduce in development. And for the record, Zipkin is only one tracing tool, and it's really aimed at helping you address latency issues, not errors.


When I came back from DockerCon this year, I immediately wanted to write something very similar to what this article describes. I wanted to imagine a world where containers and microservices were already part of the past, so I wrote "DockerCon 2020" [1] about how it would look.

[1] https://medium.com/@sinzone/dockercon-2020-a513ed04eefb#.rbz...


I'm sorry -- I think the author completely misses the point about why microservices were controversial at all?

Distributed systems are not the same as centralized ones, and you cannot paper over the differences between the two. It is wrong to think that distributed microservices will completely replace centralized services in some future paradise. The difference is not a tech fad; it's more like a law of nature. Distributed systems should plan for network failures, yet nobody wants to get a "503" from their CPU.


You will get a 503 from your kernel if you try hard enough. Handling this is technically required in any application, but often ignored, especially when allocating memory. (Because there, out-of-memory is a deferred failure.)


I'm not totally convinced that allowing developers to build faster is really all that great of an idea. At least not the sacred ideal that seems to be accepted without any question.

Most of what I see when people are moving fast is building things as fast as they can think of them based on the first idea that comes to mind that sounds like it might get things done.

But the reality is that the first way that you think of implementing something isn't always the best. It's often just about the worst. Giving people the ability to take any whim of a design and run with it all the way to production isn't the best thing overall for software quality.

Perhaps I'm alone here, but I'd like for developers to slow down and put some thought into what they are building, and how it's supposed to work, and if it's going to be able to do what it needs to do. I see a lot of "close enough" in my line of work.

I know it's different in a startup, where testing the idea now is important, and I'm not slamming that. But the vast majority of developers don't work in startups where getting a product to market before a competitor is the difference between making billions and going home broke.

We temper our desire for perfection by reminding ourselves that good enough is okay for now. I'd like us to temper our desire for speed by remembering that there is such a thing as soon enough.


What you are describing is definitely an issue I face every day, but I don't think speed of development is the cause. Higher productivity is good, but often what is called higher productivity just comes from carelessly ignoring design and planning in favor of a get-things-done-now mentality.

Often the worst offenders are rewarded for being highly productive, while the people who end up having to clean up, refactor, and get things actually working are not acknowledged for the vital effort they put in. In my view this is the result of bad management incentives and a failure to properly assess results and contributions.


"Microservices" are a new name for a very old concept.

This is just low-coupling, high-cohesion by another name.

Small, composable, decoupled, highly cohesive components are what "good" software has been about for decades, but it now has a new name in the server s/w world: "Microservices".

Only the name is new & hyped. The concepts have been true forever.


There are tools and concepts for managing microservices that didn't exist before, and so on. Nothing can be new when you use such a vague way of comparing things. There were electric cars in the 1800s; why talk about Tesla?


...so what is a new concept regarding microservices?

I don't see anything new... new tools, granted. Then again, there's new tools being produced every day for every architectural style.


Another good article on this subject: https://m.signalvnoise.com/the-majestic-monolith-29166d02222...

My experience with microservices has been pretty painful. My analogy for microservices: it's a bit like building a car factory on the two sides of the Danube, with no phone line in between. You've got a factory building cars up to a certain point, but then the workers have to stop, pack everything up onto a barge, figure out how to fit it all on board, and send it across the river for the other side to spend time unpacking and figuring out how it all fits together...

As a django guy, I've tended to follow the pattern of spending time making my models nice and rich, with useful traits that are helpful at all levels of the app, down to the views. To then have to pack this all up and deliver some "dumb" JSON to the other side feels like a massive waste of time. With microservices I spend my life marshalling data around.
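
In other words (a plain-Python sketch, not real Django, with invented names):

    import json
    from dataclasses import dataclass

    @dataclass
    class Ride:
        distance_km: float
        rate: float

        def price(self) -> float:
            # A rich model trait, usable in-process all the way to the views.
            return round(self.distance_km * self.rate, 2)

    ride = Ride(distance_km=12.5, rate=1.4)
    ride.price()  # available here, in-process

    # Across a service boundary, only "dumb" JSON travels: the behaviour
    # stays behind, and the consumer must reimplement or re-request it.
    wire = json.dumps({"distance_km": ride.distance_km, "rate": ride.rate})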

And the number of times I've realized I've just spent an hour discussing the philosophical implications of how a particular bit of the REST interface should be designed, all for an interface that we're the only consumers of and that doesn't need to exist in the first place... I've found it depressing.

The ramifications for testing are a further story. Do you have to test all the ways your REST requests could arrive malformed, if you're the only consumer and know exactly how you're going to use the interface? Is that a good use of developer time?


Ok. I'll be the guy to bring up the Elixir/Erlang ideology that has gained such popularity here. Although I don't have a ton of experience with it yet, having the idea of "services" built into the language/framework design seems very promising. That's exciting for me. Although true SOA can be a mix of many technologies, I personally find that scary. What happens when your whole platform ends up as a web of services on different technologies and you lose various talent? Now you have to recruit all kinds of different expertise, or hope that certain services keep ticking without that knowledge in house.


I don't know about microservices and such, but I've got one cumbersome monolith to deal with, and it had started to rot (you know, relying on outdated dependencies that one can't upgrade without significant effort, etc.). Splitting it into a few isolated systems looked like the only sane choice.

Luckily, I had to redo one logical part of the monolith anyway, because of some changing business requirements. So I made it a separate, independent project using modern, currently-stable tech (rather than the few-years-old stack plus the accumulated baggage of past architectural mistakes), and it all went quite nicely.

It took me 1.5 weeks (quite busy ones, but meh) to extract all the old code pieces I needed, clean them up, update them with the new logic, and get the freshly-minted project ready, tested, packaged up, and running in production. The only thing I've lost is the ability to run cross-DB queries (we just have a folder of read-only SQL snippets to fetch some useful statistics once a week or so), because I put the data in a separate database. I hope postgres_fdw will work when I need it.

Had I tried to update the whole monolith, it would've taken me months.

So the next time I work on some large-enough part, I'll probably extract it into a fresh project as well. As I see it, I'll end up with the remains of the legacy project surrounded by a few small(er) monoliths. And then the legacy piece will be small enough to get cleaned up.

(I don't know about micro- scale and putting every tiny thing into a different microservice, though. I have the impression it requires a lot of extra, unwanted cognitive load to manage, so it seems like overkill to me.)

So, my point is: software (code) rots over time. Multiple projects (services) let you update pieces at a different pace, which is less stressful for developers.


It is weird to read people write about microservices (or some other tech fad) as if it is this otherworldly thing that requires instruction and training. So many words dedicated to describing the supposedly bad old days!

All this stuff is just another aspect of the life of a practitioner of computing. A proper expert should see these things not as fads, but as a collection of techniques that can be added to or subtracted from at will, depending on the prevailing need. It's silly to declare any of these fads dead or alive; they're simply techniques that ...people... have bundled together under a common label.


I think after a certain number of years you start realizing every new fad is just a rehash of some old ideas. (You ought to notice this as you get more experienced.)


...but you can write (and sell) books about it, and talk at conferences too


The title is link bait and does not reflect the arguments put forth by the author.


Not to mention it's a plug for the product he's selling.


Maybe a more appropriate title could be 'Microservices are the Norm'


I don't think they are. I would be surprised if 10% of new code right now was coded in microservices.


Ya, you're probably right. It's likely just my bias. It feels like it's the only thing I encounter at my day job.


In S.F. startups maybe, not in the enterprise world. Enterprise is barely SOA today; they aren't going to throw away 10-20-year-old apps that still work for the latest fad. Just like MongoDB didn't replace Oracle or SQL Server just because it was hip.


The enterprise world I lived in was all SOA 10 years ago. WSDL services everywhere.


I joined a company where the proof of concept had, inevitably, become the monolithic application we would work on for the next two years. Everyone on the team agreed that the monolith would be a liability, so we started to share knowledge on microservices and plan for that future.

To do this we stuck to a handful of rules. Systems should do one thing and do it well, with a well-defined API and protocol. Whilst all the data might live in the same Redis and MySQL instances, we made the data store and its location configurable, and made sure systems did not read or write each other's data. We wrote generic systems as libraries with no dependencies on the rest of the monolith.

The result of this work, which was a lot of refactoring, is that when we decided to farm some work out to a contractor, we could do so as a microservice. They worked in their favourite language with their own tools, implementing the API we wanted over a specified protocol. At any point it would be possible to split out services to scale them horizontally, but we don't have to until we need to, because every split increases the operational costs and complexities a little.
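
A rough sketch of the configurable-data-store rule (env var names invented): each system reads its own connection settings and namespace, so splitting one out later is a config change, not a code change.

    import os

    def store_config(system: str) -> dict:
        # Today every system may point at the same Redis/MySQL pair;
        # tomorrow one system's store can move without touching its code.
        prefix = system.upper()
        return {
            "host": os.environ.get(f"{prefix}_DB_HOST", "localhost"),
            "port": int(os.environ.get(f"{prefix}_DB_PORT", "3306")),
            # Systems never read or write each other's data.
            "namespace": system,
        }

    store_config("billing")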


I feel like maybe the Big Idea of microservices is that web APIs provide better isolation guarantees than library calls, and now with the move to SaaS either the scale of our applications is large enough or (more likely) with virtualization the intra-server network latency is small enough that we can afford the extra overhead of web APIs relative to library calls in exchange for that isolation.


When it comes to services, I think it's worth talking about one common use case that comes with different motivations and problems: adding new features to an old, probably poorly engineered, monolithic application. Features that are not tiny yet-another-CRUD-on-a-new-table additions but completely different from most of the existing functionality.

In this case, services really pay off if they are separated well, which is sometimes hard. But executed well, they let you keep moving quickly as the requirements grow. Of course, this is not an excuse to avoid refactoring the monolithic application, improving testing, etc.

I've worked in such a setting at two companies, and both times it was a win and helped us build business-critical features really quickly and reliably.

But is it worth writing an application from scratch in a service-oriented architecture? Probably not, most of the time. Especially if "time to market", "MVP", and similar concepts are very important to you.


I had the feeling that microservices would add too much complexity, but with FaaS this is canceled out by the fact that almost all server management complexity is handled by a different company.


They have the word "microservice" on their homepage :D


John Titor is back!!


Did he bring back the IBM 5100 that he promised?

https://en.wikipedia.org/wiki/John_Titor


Every generation's microlith is the next generation's monolith.

For this generation, the end of microservices will be when we can look at a cluster as one big unit, and deploy a microlithic monolith on it.

For the next generation, who knows how they will slice it up.


re: "it was never the size of the services that mattered: it was the connections and the relationships between them"

Re: relationships, I would say these are better thought of through the lens of separation of concerns.

IMO, connections, a la protocols like HTTP, will fade into the background and become a focus of ops. A piece of code knows to write/read data to/from _________, for which it has been authorized access by address/hashname/entity.


Not to seem naive, but I work a lot with microservices and I've never heard of lightstep...

... follow-up: what's everyone's opinion on this?


Ben Sigelman, one of the founders, is a former Googler who's got tons of domain knowledge about profiling distributed systems of services and about how to structure calls and dependencies within those systems. He gave a fun talk at the SF Golang meetup in 2014. I, personally, am very excited to see what he's up to and will be keeping an eye out for more Lightstep stuff - I guess it's his newest thing.

(if you're reading this, hi Ben! Sorry we never got together last year)


One of the best recruiting/employment branding pieces I've seen. Genius!


The Gartner Hype Cycle appears to be accelerating...


nitpick: nice graph on the "information superhighway", but a label saying exactly what is being measured (0.000055% of what?) would be great. Is this a word count? Or its rate of change? There's a reason our (stuffy) professors/sub-editors always insist on sources and accurate labeling. I'm only interested because the most recent data points, despite the large decline, still suggest non-negligible use of a term I haven't heard in years outside a small number of anecdotes/jokes.


It's just % of bigrams that appear in Google's sample of books. [1] Not especially meaningful, I admit!

[1] https://books.google.com/ngrams/info



