The microservices cargo cult (stavros.io)
277 points by stelabouras on Oct 6, 2015 | 173 comments



Yes, a thousand times yes! Microservices are yet another tool in the box, but they shouldn't be used for everything. And it makes no frigging sense for a startup or any new project to start with microservices...

The main advantages of microservices are in scaling and in reducing the complexity of a big system, but those advantages only make sense when you have enough traffic that you actually have to scale, or when your system has become complex enough to warrant them.

When first starting development, the most important thing is speed of development, so you can get feedback from users as soon as possible. It's much faster to develop a clean, well-optimized monolith than to spend a lot of time developing a whole bunch of microservices. And while thinking in terms of microservices will help you better conceptualize your software architecture, at this stage you don't have all the information needed to have a clear idea of what the final architecture will be, and you'll often end up with microservices that are divided in suboptimal ways, causing a lot of pain.


The article didn't even mention a lot of drawbacks:

Changing anything means changing half a dozen programs. Potentially in different languages.

Building any new feature into the program means you get to "walk the dependency tree", making sure no serialized stuff from a new version gets sent to an old version. Good luck with circular dependencies.

Related: deleting a field ... never going to happen. We're talking years of planning. Any field added ... ever ... has to be taken along for the ride for years. Oh and don't even think about deleting the code that interprets and supports the field for the same reason.

Also related: best of luck with the interactions between "oh, we're not going to do that after all, sorry about asking you to already push that and update the database" and the "we really need to do this next thing right now" features.

Constant serialization overhead. People go overboard with this microservice crap and the result is that 99% of your program's CPU time is spent converting objects to JSON and back (which is very expensive due to constant allocation), and you have 10-100 times the normal memory overhead.

Microservices should be like optimization: build your program without using them and then figure out where they'd make sense.

Yes, I know, you can sort of avoid it these days with Cap'n Proto and FlatBuffers.


> Building any new feature into the program means you get to "walk the dependency tree"

Same with monoliths, except it is often a worse experience. At least with microservices I know the interface is all I have to worry about. In monoliths of any size inevitably someone has reached into parts of the program they shouldn't have just to get something 'done quickly'. And this is one of the main benefits of microservices - enforcing the interface boundaries.

> Related: deleting a field ... never going to happen. We're talking years of planning. Any field added ... ever ... has to be taken along for the ride for years. Oh and don't even think about deleting the code that interprets and supports the field for the same reason.

That's just poor design, and it happens just as much in monoliths. The DB is almost always the challenge when removing a field. I could argue that microservices make it easier, since the service providing access to that field could remove it from the DB and then dummy it out until clients are updated. Also, why wouldn't someone remove the field from all the clients when removing it from the supplier?

With that said, I agree that microservices should be something that happens organically from a monolith. Think about an amoeba that reaches a certain size and only then do parts split off. I also think there is some ambiguity to what constitutes a microservice. I'm sure my idea of proper granularity is different from others.


> At least with microservices I know the interface is all I have to worry about.

You wrap your code inside a reusable library if you want to encapsulate it and expose only a small public interface. You don't start a new service for that.


It doesn't ever work that way in practice for larger applications though... once you have multiple teams with many developers working on something, those library boundaries fade, and tbh sometimes they should just be bypassed in favor of more shallow layers (for a monolith).

Once there are more than a handful of people working on anything no one person understands the details of the whole. Smaller, specialized services can at least act as and enforce those boundaries as upstream comments said.


>"Once there are more than a handful of people working on anything no one person understands the details of the whole. Smaller, specialized services can at least act as and enforce those boundaries as upstream comments said."

Until you have so many services that you also need someone who understands the details of all the different services interacting.

Really, if you can't keep your developers from leaking across separation-of-concern boundaries, then they'll do it in any architecture, monolithic or microservices-based.


I think you can have services separated by concern without getting too deep in terms of complexity, though... I wouldn't use the term "micro", though... I also think if you are only doing mutation, you can combine them, or do libraries in a control service... there are lots of ways to skin the issue...

Digging down through 7 DLL projects in one solution, then 11 in another, to add a small feature that should take 10 minutes but takes 4 days isn't so much fun... the same would go for 30+ service projects...

I'm also not big on starting with smaller/micro services either... or ORM, or DI/IoC for that matter... Build the monolith and break pieces off as needed. I'm fine with that... I was saying it's easier to herd cats when they're not in the same space as each other.


The other benefit of a microservice is you can understand it. You can have the entire thing in your head. It's easier to maintain. With a monolith, good luck.


deleting a field ... never going to happen

This is true for ANY class you release. I'm not even talking about a service API: if you published a class, and it gets used, you should never delete or change fields in it. You might get away with adding fields to it, depending on various factors.

This is just the open/closed principle (OCP); why is it a surprise?


Depends. Things are rarely in absolutes :)

If you serialise / deserialise you are probably coupled to an underlying type. Modify it instead of extending it and you have pain.

If you parse the raw message, or serialise into a hash or dynamic type, then you have one place to make amends (your parser). Taking things further, you can version your messages and map them to versions of parsers; that way you never violate OCP (or at least you can mitigate violations simply).

Granted, all of which is additional complexity compared to your stereotypical ball of mud / monolith that'll let you refactor field name changes etc. to your heart's content. Obviously the ball-of-mud approach has drawbacks too...
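To make the versioned-parser idea concrete, here's a minimal C# sketch (all names and payload formats are invented):

    // Each message carries a version; the dispatcher picks the matching parser,
    // so old formats keep working while new ones are added alongside them.
    using System;
    using System.Collections.Generic;

    public class Order { public decimal Total; }

    public static class OrderParsers
    {
        static readonly Dictionary<int, Func<string, Order>> Parsers =
            new Dictionary<int, Func<string, Order>>
            {
                { 1, raw => new Order { Total = decimal.Parse(raw) } },              // v1: bare amount
                { 2, raw => new Order { Total = decimal.Parse(raw.Split('|')[1]) } } // v2: "currency|amount"
            };

        public static Order Parse(int version, string payload)
        {
            Func<string, Order> parser;
            if (!Parsers.TryGetValue(version, out parser))
                throw new NotSupportedException("Unknown message version " + version);
            return parser(payload);
        }
    }

Extending the system then means adding a new entry, not modifying an existing one.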


I always found getters and setters very helpful for this case. In C++ and in Java. Doubly so in Java because the following works:

1) field is no longer important, gets replaced

2) write a getter that still gets the value of it, but delete the field itself.

--- this is where it stops for C++

3) use the "inline function" refactoring on your entire codebase

Done/done. You don't even have to rewrite the tests, in fact it might be better not to.

(Of course, getters and setters are somewhat verbose in most cases. Object Pascal has the perfect solution, though.)
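The C# equivalent is a computed property (a sketch; Person and FullName are made-up names): delete the stored field, derive the value in the property, and call sites compile unchanged:

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }

        // The stored FullName field is gone; the property now derives the
        // value, so existing callers keep working without changes.
        public string FullName
        {
            get { return FirstName + " " + LastName; }
        }
    }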


> Changing anything means changing half a dozen programs. Potentially in different languages.

Wouldn't this only be a problem if you broke up your service along the wrong lines?


I think he was referring to where the article said that one of the advantages of breaking a monolith into any number of services is that they can store their data in the location and format that suits their task best, and also be written in different languages.

I don't think it matters how you split it - if the team working on one module prefers Ruby and the team working on another requires Python for any 'serious' work, you need programmers who understand both to get the 'big picture' of your codebase now. That one guy who writes all his stuff in PERL can write his plugin that way because it just interacts via the API. :O

Now how do you debug something?


It's Perl, not PERL.


Boundaries have problems in monoliths as well, can't really imagine it would be any easier with microservices.


If you're using a flexible serialization scheme, as many JSON libraries allow, you can omit the field and fall back to a sane default. You can also handle this at the service boundary for older clients. Personally, I find this type of coercion more reliable than explicit case testing, and it usually leads to fewer unexpected results.
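For instance, with Json.NET (a sketch; the ClientEvent type is invented), a property initializer supplies the sane default whenever the field is absent from the payload:

    using Newtonsoft.Json;

    public class ClientEvent
    {
        public string Kind { get; set; }

        // Added in a later version; old clients won't send it, in which case
        // deserialization simply leaves the initialized default in place.
        public int Priority { get; set; } = 5;
    }

    // A payload from an old client, with no "Priority" field:
    //   var evt = JsonConvert.DeserializeObject<ClientEvent>("{\"Kind\":\"share\"}");
    //   evt.Priority == 5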


Martin Fowler wrote a great post, "MonolithFirst" [1], just a few months ago. Quoting him:

1. Almost all the successful microservice stories have started with a monolith that got too big and was broken up.

2. Almost all the cases where I've heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

[1] http://martinfowler.com/bliki/MonolithFirst.html


That's somewhat disingenuous. Historically all software has been "monolith first" and has only adopted an SOA out of need. The microservice mindset is relatively new, and we're not really going to see it mature for another year or few.


"relatively new"? I've seen lots of places use little http endpoints for discrete tasks. Whether the endpoint lives inside a shared container or has its own embedded server is an implementation detail.


If the argument we're going with is “well, microservices-first is relatively new so we won't know for another year or 3 if this is a mature route to take, but everyone's trying it anyway so let's do that", there is NO WAY I can or want to sell microservices-first to stakeholders.


That's not really what I meant. There's a massive survivorship bias with monolithic architectures.


> And it makes no frigging sense for a startup or any new project to start with microservices...

Doesn't that really depend on what the startup is doing? If your goal is a SaaS platform things might be different.

> ..you'll often end up with microservices that are divided in suboptimal ways causing a lot of pain.

Or one large monolith that is a mess. Really doesn't matter if you go with monolith or microservice architecture, you can screw up both ways.


I think you're right, but scales will shift in a surprising way; also, it's possible to refactor to microservices bit by bit (not all at once).

(1) Easy scaling allows for worse coding (and therefore quicker development).

For "webscale" you're right, and if you can run it all on a single cheap server, then no need to worry about it - small servers are so cheap, the cost per month is insignificant relative to, say, ramen cost per month.

But what if you write your app incredibly inefficiently? Perhaps because you develop it quickly; you're not highly skilled (or not even a "developer"); it's difficult to optimise. It's very slow, seconds per request. That's fine for getting started, but you'll need to scale long before you get to "webscale".

You could just upgrade to a better server, but at some point the costs become prohibitive. The key thing about microservices is that they should be cheaper (because vendors can better utilise capacity, and customers only buy what they need, when they need it).

(2) You don't have to architect the whole project for microservices from the get-go.

e.g. If there's a single resource-heavy function, factor it out into a microservice. Your main code becomes super cheap to host; the microservice only uses expensive resources as needed: when and if it runs.

===

At the moment, microservices are a pain to set up and manage. But with tools, it will become very easy to factor out a module into a microservice.

That is, assuming that code is already a separate module - microservices aren't magic pixie dust!


I'm in the middle of a microservices mess that was forced upon us. I have nothing positive to say. If you're in the SaaS space already and it's not a greenfield project, it's orders of magnitude better to deploy lots of smaller identical monoliths than it is to try and build and deploy lots of services and manage the contracts and complexity between them.

Major problems I've seen are: per transaction performance sucks due to the network or IPC channels, development friction, logical complexity, infrastructure complexity, managing contracts between services, debugging failures, monitoring performance, bootstrapping new staff and the biggest of the lot: headspace.

If you want to succeed, at least in the short term, just keep your monolith tight and fast and without sprawling infrastructure requirements. Single machine, single process, single storage engine (or database), single messaging system. Then scale that to multiple instances. If your site deployment requires at least 20 machines due to sprawl, you're going to be even more screwed when you throw microservices at it, not less. If your application is incredibly complex, it's not going to work either. The problem domain needs to be small and easy to consider as it's difficult to cleanly extract a chunk of your average monolith into a standalone concern.

There are also people with technical authority in many companies who blindly follow the latest fad without real consideration of suitability, risk assessment or accountability. If someone starts waving microservices, AWS and everything else around, people need to challenge that position, and everyone needs to assume that isn't the default end game.


If you understand the problem domain well, splitting up a large application into a few self-contained applications makes sense. However, I think the real problem that a lot of people face is that they try to split everything up with too much granularity, they take things way too far.

I'd probably start off any project with a monolith-first approach, and if I find parts that can be cleanly isolated and require minimal cross-communication, I'll consider splitting them out; or at least keep it in mind as I implement an initial prototype.


Oh dear. =/ We're staring down the barrel of the microservices gun. It's the "only" answer to a huge monolith PHP app, I'm told. We're a small dev team, have no scaling issues, we just have badly designed/disorganised data all over the place.

Perhaps I'm being selfish because having all your data in one place (esp with an ORM on top) makes my job of automation really easy. E.g. to generate monitoring configuration only for servers in accounts that are not suspended, that are provisioned, that have a public IP, that are not in the Blackhole table etc is really easy with JOINs.


> it's orders of magnitude better to deploy lots of smaller identical monoliths than it is to try and build and deploy lots of services and manage the contracts and complexity between them.

This article from 2003 by Martin Fowler is exactly about that: http://www.drdobbs.com/errant-architectures/184414966


Indeed. I remember reading that back then!


Microservices are advertised as a means to modularization, but that's what programming language modules are for - they are defined at the source code level and can be freely used in different runtime components without network/ops/version-management headaches. Once you have your module defined that way, you can think about exposing it as a microservice if that makes sense for your use case.

Imagine that each Python module ran as a microservice. For many modules this would lead to huge performance degradation; for example, a regexp module can be called thousands of times per second, the running time of a call is usually short, and replacing an in-process call with a network call will give a 100-1000x slowdown.

But if you take a different use case for the same module - complex regexps running on large texts, potentially causing out-of-memory errors - then packing the module into a microservice can make sense: separate processes can have large caches, and an out-of-memory error terminates only an instance of the microservice, not the calling process.

Generally, I think the advice should be to always use source code modules in the first place, and create microservices from these modules only for specific use cases involving runtime needs like caching, fault tolerance, or scalability.
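As a sketch of what that looks like (the names are illustrative), the module boundary is an ordinary interface, and the microservice is just one possible implementation behind it:

    // Callers depend on the module boundary, not on where the code runs.
    public interface IRegexMatcher
    {
        bool IsMatch(string pattern, string input);
    }

    // Default: in-process, zero network overhead.
    public class InProcessRegexMatcher : IRegexMatcher
    {
        public bool IsMatch(string pattern, string input)
        {
            return System.Text.RegularExpressions.Regex.IsMatch(input, pattern);
        }
    }

    // Only for the heavy use case would you add an implementation that forwards
    // the call to a separate service process; the callers don't change.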


> Microservices are advertised as a means to modularization, but that's what programming language modules are for - they are defined at the source code level and can be freely used in different runtime components without network/ops/version-management headaches. Once you have your module defined that way, you can think about exposing it as a microservice if that makes sense for your use case.

Yes, yes, yes! If your architecture and data model are funked up, then it doesn't matter how you implement them - you are screwed. On the other hand, proper modeling with separation of concerns and well-defined interfaces will let you implement them as anything you need, be it microservices or function calls inside a monolith.


Most of the runtime needs in this space can be managed with circuit breakers, be they temporal or resource-based, without having to provide separate processes or memory spaces.

For example, your regex could be given 20ms to run and a maximum of 8MiB of heap to use before it is interrupted. This can happen on a single thread and drop back to the caller if an exception is thrown. I'd love to see a language feature which defines the maximum stack and heap for a particular scope, i.e. in C#:

   Breaker.Heap(8.MiB(), () => {
       Breaker.Time(200.Milliseconds(), () => {
           // risky operation
       });
   });
(we already do the time breaker, but not the heap)

Edit: correct calling convention.

Edit 2: add missing extension method brackets


Aside, for both the circuit breaker and numeric overloads, is that a particular library you're using or just example code?

By themselves, the overloads for integers like that is pretty neat, never seen that before. I can actually think of several dozen places I do things like TimeSpan.FromSeconds(x) where that could enhance readability.


I was thinking the same thing - extension methods on the types of common literals can make for some very pretty interfaces.

Combine that with operator overloads and implicit casts and you have some really powerful tools for building readable APIs :)


It's a real library I wrote. I just noticed I missed the brackets. Quick example you can infer the rest from:

Code:

    public static class IntegerExtensions
    {
        public static TimeSpan Minutes(this int val)
        {
            return TimeSpan.FromMinutes(val);
        }
    }

Usage examples I can think of:

   var eightMinutes = 8.Minutes();
   var tenMinutes = 8.Minutes() + 2.Minutes();
   DoSomethingEvery(8.Minutes());


Yeah I figured. Is it published/OSS?


Not at the moment. I've got a ton of stuff I probably could publish however.


Just a note, because I started the same way you have, simply writing my own extension methods in a common DLL: the Humanizer library has a bunch of these features, including the 6.Minutes() functionality:

https://github.com/MehdiK/Humanizer


The latest version of .NET actually has a built-in timeout argument to the regex matching functions ;)
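For reference, the overload (available since .NET 4.5) takes a TimeSpan and throws RegexMatchTimeoutException when it's exceeded:

    using System;
    using System.Text.RegularExpressions;

    // A pattern prone to catastrophic backtracking, capped at 200ms per match.
    var re = new Regex(@"(a+)+$", RegexOptions.None, TimeSpan.FromMilliseconds(200));
    try
    {
        re.IsMatch(new string('a', 50) + "b");
    }
    catch (RegexMatchTimeoutException)
    {
        Console.WriteLine("Regex timed out");
    }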


Hadn't noticed that. Thanks for the pointer :)


Interesting. Is the circuit breaker a feature of .NET running on Windows?

To my knowledge, it can't be implemented on Linux + Java or Python (a thread can't be terminated from outside, and some syscalls involve a whole process).


It's a library I wrote.

The time breaker is actually quite complicated. It is a wrapper that sets up some global parameters on the thread for timeouts on async/await calls and handles the timeout conditions. It integrates with our own async wrappers for external http calls, message delivery, query execution etc. It only enforces that all aggregate async calls will complete or fail by the end of the timeout period. Realistically this is usually around the 500-800ms space as load spikes can break everything otherwise.


.NET has a public Thread.Abort function which allows this; it will result in a ThreadAbortException on the aborted thread. I think there are ways to do this in Java and Python also, although I know the official stance on aborting threads from the Python maintainers is no way in hell.


I think this can be mostly implemented in Java with bytecode rewriting which checks for the conditions from time to time.

Stopping the thread can still leave some state indeterminate if the thread holds resources it needs to release manually.


Very true, but I've heard modularization touted as a benefit of microservices many times, as if it's exclusive. You can get modularization as a first-class feature in your favorite language, and almost for free!


What you're describing is an operating system.

Replace Microservices with Microkernel and you can read the Torvalds-Tanenbaum debate instead [1].

[1] http://www.oreilly.com/openbook/opensources/book/appa.html


You're right, but sometimes modules != services

Your database is a service. You can't use multiple dbs depending on the situation.

If it can be replaced by multiple instances of the same module, then yes, creating a microservice is probably stupid.


>they are defined on source code level and can be freely used in different runtime components...

To me, components and services are two completely separate abstractions. A component is a collection of code -- a module -- that can be duplicated and used in numerous different systems (much like a 5 ohm resistor). Updating a module in one system doesn't affect any other external system. Services on the other hand I think of as multi-tenant systems that are used to provide common functionality that is likely to change.


Yeah, which just goes to show how well microservices advocates understand programming.


> Imagine that each Python module ran as a microservice. For many modules this would lead to huge performance degradation; for example, a regexp module can be called thousands of times per second, the running time of a call is usually short, and replacing an in-process call with a network call will give a 100-1000x slowdown.

This is absolutely irrelevant. If your call budget is 400ms, then the extra 4ms it takes to fetch data from a microservice is negligible. Make that 400ms budget 4ms, though, and you're done for.


This leaves open the question of what microservices are. Are they of necessity completely isolated units, deployed with Docker and Kubernetes on a CoreOS cluster, communicating only via HTTP, each with independent databases? Yes, this seems a bit much for most projects.

There are midway options though. Even the lowly batch job is a good way to get some of the decoupling without having to go "all-in". I find batch jobs and message queues give me 80% of the benefit of "microservices" with only 5% of the pain.

In fact, someone needs to write an article on "levels" of "microserviceness" (which certainly has multiple dimensions and branches) and point out the benefits and drawbacks of each level.

Of course the end game being: "a Docker container for each line of code."


> a Docker container for each line of code

Yes, this is the surest way for developers to guarantee the code itself is always bug-free. Everything becomes "just an ops problem".

2017 predictions: buzzy job title: Ops-Dev. LabVIEW-style graphical programming http://www.ni.com/newsletter/app/largeimage?lang=en&imageurl... becomes the new hotness. Amazon and Google introduce new graphical programming workflow coordination services for Lambda. Nothing ever gets finished.

(2018 predictions: google discontinues this service)

(2019 predictions: every machine instruction now belongs to its own Docker package; the successor to Kubernetes is marketed as a distributed asynchronous virtual CPU. Data centers begin requiring small nuclear stations to generate the electricity to power them. Still nothing gets finished.)

(2020 predictions: someone rediscovers this "heroku" thing; things start getting done again. Hot new industry: nuclear waste disposal)

(2119 prediction: nuclear stations powering 2019's data centers are still running at full power but servers are all dormant; nobody wants to turn off a server "just in case").


"graphical programming workflow coordination services"

Those things are always sold to CIO types based on the idea that "you won't need developers", and then you end up with some eldritch graphical abomination where being able to visualise things just makes it worse...

Draw a rough diagram and then write your code - fine, generate a diagram from code - fine, generate code from a diagram - nightmare.


For people who gawk at this LabVIEW screenshot: this is what actual LV production code looks like - the rule, not the exception!


In fact, someone needs to write an article on "levels" of "microserviceness" (which certainly has multiple dimensions and branches) and point out the benefits and drawbacks of each level.

I think I might literally have a "SOA Maturity Model" slide laying around somewhere from my mid-2000's consulting career.

Let me know if you need it, ironically or not.


You're giving me flashbacks.

But the parallels are eerie. Someone gives a nomenclature to a style of architecture that many people were already using (just last week I heard that Netflix "invented" microservices), and suddenly the attention of the industry pivots. Technologies will be invented to ease adoption (yay!), but people will forget that this architectural style is one part of a solution for a particular type of problem -- it is not a solution for every problem, nor does adopting it mean all of your other problems will go away.

But consultancies will rise, debates will rage and conflicts over what truly constitutes a microservice will ensue.


conflicts over what truly constitutes a microservice will ensue.

So true. I feel like this has always been an issue in the "services" world. It's hard to nail people down on a definition that's both accurate and prescriptive. Either you end up with a definition that is so generic as to be unhelpful, or you come up with narrowly defined definitions that miss important use cases. I've decided that "services" is a word like jazz or porn or art: you know them when you see them.

My only wish is that we could, as an industry, find a way to better bootstrap our systems. We have a tendency to launch products / companies around bad ways of doing things and then spend a not inconsequential sum of money retrofitting everything.

It would be great to see tools and framework built from the ground up to address both developer productivity and scalability. It too often feels like achieving scalability comes at the cost of developer productivity... perhaps because by the time you need to scale, you've got enough resources to hire tons more developers and have them engage in menial tasks.


For sure there's a buzzwagon; and, true, microservices are just a kind of SOA (which itself wasn't new), etc.

What's different this time round is that microservice hosting should be much *cheaper*.

When you're cheap enough, all sins are forgiven.


I was just talking with a friend about batch jobs, and I was remarking that they can also be considered microservices. The difference is that batch jobs don't need to be interactive, so the contract is different.

Unfortunately it's a bit late for me to include this in the existing article, but it's a good topic for a next article. You will generally have a much easier time splitting tasks off and running them on workers than full-blown, standalone services (because the MQ can retry the tasks, they can be idempotent, you can have different timeouts, etc).

The usual definition of a microservice is a stricter subset of a task, in general.


Agreed. In fact batch jobs are, in at least one dimension, more decoupled than strict microservices.

Current client is a photobooth. Photos can all be shared from the client app and the web; each "share" is just a REST post containing "shareType:string" and "meta:json", and all share types (email, MMS, Twitter, etc.) go through the same endpoint. The web API simply adds these to a message queue for the corresponding "shareType" job to process. This way, new types of sharing options can be added just by changing the client and adding the new batch worker; no updates to the web/API server app itself are necessary.
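Roughly this shape, as a C# sketch (the DTO and queue interface are invented for illustration):

    public class ShareRequest
    {
        public string ShareType { get; set; } // "email", "mms", "twitter", ...
        public string Meta { get; set; }      // opaque JSON, interpreted only by the worker
    }

    public interface IMessageQueue // hypothetical queue abstraction
    {
        void Publish(string queueName, object message);
    }

    public class ShareEndpoint
    {
        private readonly IMessageQueue _queue;

        public ShareEndpoint(IMessageQueue queue) { _queue = queue; }

        // The API server never interprets the share; it routes it to the queue
        // named after the share type. A new share type means a new worker, not
        // a redeploy of this endpoint.
        public void Post(ShareRequest request)
        {
            _queue.Publish("share." + request.ShareType, request);
        }
    }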

Proper "microservices" are usually described as being directly coupled to each other. They're just the new hotness so everyone wants a piece.

As most of the comments are saying, choose the shoe that fits.


> Are they of necessity completely isolated units deployed with Docker and Kubernates on a CoreOS cluster and communicating only via HTTP each with independent databases?

I'm pretty sure the Docker, Kubernetes, and CoreOS parts are unnecessary (as, IIRC, microservices started being discussed before those tools were available -- those tools were driven in part by microservices, but aren't essential to the model.)

HTTP is, likewise, unnecessary: it's the most obvious protocol choice, but there is no reason that microservices must be HTTP-based - you could have a set of microservices using just about any protocol you want.


And here again we will see the rise of the mighty, the glorious ENTERPRISE SERVICE BUS.

https://en.wikipedia.org/wiki/Enterprise_service_bus

Where are my mainframes? I think I'll need them soon.


Maybe every module in a Monolith could also be considered a Microservice, with extremely fast networking and an extremely terse remote interface specification language.


Don't forget RPC with protobufs.


I think this article isn't very useful. It's unfortunate that we have this human need to blow things up and then deflate them. I dislike the term "microservices" because for most organizations all it does is plant a flag all the way across the yard from "monolithic." Suddenly the guys at the top have a new buzzword and the engineers are all trying to make their services as small as possible so they're micro enough.

Most of the gotchas the article mentions aren't logical consequences of decomposing into smaller services at all. You don't have to have different data stores for each service. You don't need to "marshal data" between services. If a service needs to call another service, it's just a client like any other client, so if we want to call standard HTTP request/response handling "marshaling", I guess it will sound more complex and scary. Breaking a monolithic app into smaller pieces doesn't increase complexity, it reduces it. And to the extent you have more things to monitor, that probably means you can now monitor and control things that were more or less invisible outside the log data in the monolithic architecture.

More importantly, decomposing a problem into logically related areas of functionality that can execute separately allows you to make the most efficient use of compute resources, and it is consistent with the idea of favoring multi-processing over multi-threading. In almost every way, groups of simpler things collaborating make much more sense than large complicated things that do it all. It's only when we create these knights in shining armor that people start feeling like they have to be knocked off their horses. Use the tools and techniques that make sense.


Some of the weaknesses can be tempered by not using HTTP to communicate between the microservices:

- "slowdowns on the order of 1000%"

- " bunch of code necessary to marshal/unmarshal data [...] there are always dragons in there.

And also problems of versioning, data integrity, etc.

I've had those problems in a microservices architecture. That's the kind of thing that's solved by protobuf [0]. Your servers exchange small, efficient structured data and you get tons of other benefits ({un,}marshaling for free, integrity, versioning, ...).

Potential downside: a language you want to use having no protobuf API.

Finally, I see another downside to the microservices architecture: it may be decided that the smaller, decoupled code bases should be stored in multiple VCS repos. Which turns into a nightmare: a single bugfix may span multiple repos, there is no clean built-in way to link commits across them, you still have to sync the interfaces (e.g. with git submodules), etc. This is a thing I've witnessed firsthand, and proposals to merge the repos were dismissed since "we [were] using a microservices architecture". Yes, it's a mistaken implementation of the microservices paradigm, but it still happens.

edit: I recommend protobuf not out of preference over other equivalent solutions, but because it's the only one I know and have used. Alternatives are mentioned below.

0: https://developers.google.com/protocol-buffers/


A network protocol is only a small component of a microservice in terms of affecting performance; by far the biggest difference is an in-process call vs. a network/IPC call. The latter is at least hundreds of times slower due to how computers work [0].

I'm talking about function call overhead only, so if the actual processing takes more than a few milliseconds it stops being important.

[0] http://www.eecs.berkeley.edu/~rcs/research/interactive_laten...


I really recommend against using protobufs. There are long-standing bugs that Google just refuses to fix in the public version. I can't remember what they are off the top of my head, but I know a semi-prominent YC company that uses them, and they pull their hair out all the time.

Just use zerorpc. It's more reliable than ZeroMQ + protobufs, and it comes with a bunch of freebies, like built-in heartbeats, streamed responses, etc.


How about Cap'n Proto? Their marketing is great, but I don't recall seeing many experience reports, so I'm asking for feedback here.


First I'm hearing of it. Looks really promising, but without any experience with it I cannot really say.


They wrote it specifically for sandstorm.io, so they're dog-fooding it.


Take a look at gRPC (http://grpc.io). It addresses a lot of the long-standing complaints about using protobuf as an RPC solution.


Having used protobufs, they are not magic. Sure they're faster to iterate and use than HTTP but not significantly so in many use cases.

They still marshal/unmarshal, but they do a better job of hiding it from you.

The network is still unreliable, insecure, slow, etc., and so will still have many of the same failure modes as HTTP. A better CORBA is still CORBA... whether you call it XML-RPC, RESTful, protobufs, Thrift, Hessian or what have you.

And goddamn I hate the optionality of all fields that is "best practice" with protobufs. It makes processing any somewhat complicated data structure a PITA.


Yep, when evaluating whether to go with more microservices we looked into RabbitMQ for the transport and protobufs for serialization. In the end, we decided to roll the existing microservices into the monolith, which was by far the better decision, as we didn't need the scalability.


When you were evaluating AMQP, did you consider a lightweight RPC system like Nameko?

http://lucumr.pocoo.org/2015/4/8/microservices-with-nameko/


No, we didn't get that far, but this looks very interesting, thanks for the link!


Instead of protobuf, we went with Thrift (https://thrift.apache.org) and have been happy with it so far. Our Go, Java, and Python components all use it to talk to each other.


I have a feeling even protobuf is a lot slower than a monolith that can just pass data in memory. You're still opening network connections, probably to other servers, allocating memory on the receiving end, etc.


So I think there is a parallel between this and the whole "everything must be in tiny modules on npm" movement in the JavaScript community. If you do this, you end up with lots of repositories, a rigid separation of concerns, a network between you and your dependencies that will get hit a lot unless you wield some npm link sorcery, and a bunch of extra complexity.

A modular monolith application is what people have been writing since people thought up the notion of modules. Enforce proper discipline when building your app out and you won't need these physical walls between your functional areas.

I'm currently reading SICP, and the notion of using "block structure" in Lisp to compartmentalize and encapsulate functional areas of code is introduced in Chapter 1.

Get the basic stuff right before you start introducing complex systems to split up your software.


And there's probably a parallel to "one file per type" with tons of little types. It feels like you're making progress and doing stuff. You've not just done one task, you've deployed changes to 20 services. It also feels like you've made things more abstract than they actually are. It also feels more like Real Serious Programming.

I've used a Twitter API client (works fine) that implements itself across 7 separate libraries. All same project, they all work together, and they aren't used in any other system. Just separated things out for the fun of it. One library for Twitter client "Factories". One for Twitter "Credentials". Another for Twitter client "Security". Zero benefit to the user or to the project. But it certainly makes things seem more important, eh?


It totally makes sense to have separate libraries even for one project if they can be implemented by different people and tested separately. I'd shout for that design any time of the moon.


They can't be deployed separately. There's no point in splitting them up. Testing and development aren't affected.


Deployment is not the only thing that matters. Development is also important. I am actually working on an interpreter for a web scraping tool, where I'm working on the interpreter and my friend on the IDE. The likelihood of them ever being deployed independently is low but the separation is helping us focus.


Seems to me like the only concerns with tiny modules on npm are development-related. npm could improve the workflow here by specifying a single path to look up (autolink?) additional development dependencies rather than having to link them in each project one by one.

edit: And it seems like someone already thought of that and wrote a module for it ;) https://www.npmjs.com/package/npm-autolink


npm is the biggest nightmare of a package management system that I've ever come across.


What other package management systems are you using for comparison?


I know several: Guix, Nix, dpkg, yum, brew, pip, gem/Bundler, composer, package.el, and probably others that I'm forgetting. Most of those I listed are bad (the exceptions being Nix and Guix, which actually do it right), but npm takes the cake. It somehow manages to create situations where applications are using many different versions of the same library, and where the same packages are duplicated in the hilarious mess that is the node_modules directory tree.

Trying to package software written for Node in a systems package manager for use on GNU/Linux distributions is basically impossible, because npm has created a wasteland where even the simplest software may have over 100 runtime dependencies, and then if you actually want to verify that their test suite passes you need 100 more "development" dependencies. jQuery's package.json will create a node_modules directory with 265 node_modules directories in it. [0] 265 separate pieces of JavaScript software to build jQuery from source! That is insane, but completely mundane in Node land.

I had a discussion on their IRC channel about the challenges I've had trying to package software written in Node for GNU/Linux distributions, and was told that it's just the "Node way" and that it is better than everything else. Everyone just uses npm without knowing what's under the hood and moves on with their life, but when you actually try to untangle it all so that software written in JavaScript isn't tightly, irrevocably bound to a single package management system written by people who do not understand how to do package management, you find yourself deep in computing hell, wondering how everything went so damn wrong.

[0] http://dustycloud.org/blog/javascript-packaging-dystopia/


So basically, what you're saying is that it's hard to package Node apps using typical systems package managers in situations where you want to build everything from source.

I can live with that. The worst fate would be having to modify what npm does (replacing the package fetch with building from source). Given that npm is licensed under the Artistic License, that's not half as bad as it sounds.

I do agree that there is a lot of unnecessary complexity introduced by today's JS build tools such as grunt and gulp. But npm isn't to blame for that - its only sin is creating and nesting duplicate dependencies, something which is already fixed by npm 3 (completely flat node_modules unless there are unavoidable dependency conflicts). The solution here is to avoid bloated packages such as grunt and gulp that try to install everything plus the world. (coffee-script as a dependency of grunt? Seriously?)

One thing npm could do is separate buildDependencies from devDependencies. There are a lot of tools listed there that pertain only to testing, and properly testing a serious client-side JS library such as jQuery is a hard and complex endeavour.

Also, I can't help but notice the irony of complaining about the complexity of web application build tools given the complexity hell that are autotools :)


Let's add another problem to the list: the packages uploaded to npmjs.org are typically not source packages, but "binaries" (in the sense that they aren't source). They often contain minified JS files and no test suite, making them effectively useless for anyone who wants to build from source. Of all the language-specific package hosting sites, PyPI is the best I've encountered, as it does proper source releases.

Also, a package manager should not be a build tool. A build system should never require a particular package manager to be present. It's okay to say "you can optionally use this package manager to get all the dependencies needed to build", but it's not cool to say "you need this package manager in order to build."


npm does allow you to specify a repository url in package.json and warns when one isn't present.

I guess we may've gotten a little too happy with the centralised proprietary cloudy source distribution channel with a single point of failure...


One thing that is usually overlooked: do microservices fit your team structure? A team that spends all day together can manage a monolith very well; synchronous releases (everything at once) are not a problem. If you don't need 24/7, then it's even better.

However, if you're a distributed team (maybe across timezones), where quick discussions are difficult and 'costly', then microservices might be worth the effort. Managing the deployment and operations is more difficult, but sometimes much less coordination is needed when people communicate through APIs and not Skype and Slack.


This. This is so often overlooked. Getting your architecture to roughly align with your team structure can do wonders, even if there's only a hallway dividing people. When a small team can iterate and evolve a chunk of code independent of anyone outside the team, it really unshackles them.

There are obviously patterns for doing this in monoliths (or, say, mobile clients - another type of monolith), but at some point you are bound by the dependencies inherent in a single runtime.


As others have already pointed out, you don't need microservices to achieve a good modularization of your code. Then have each team work on a different module, with clearly defined APIs between them. No need to separate them with HTTP calls.


The main benefit of microservices is that each team can deploy their service independently.

This is really useful for reducing the level of required interaction (and pressure) between teams.


It may reduce development interaction, but it'll certainly turn testing into a nightmare.

If you have teams that communicate badly, you'll need people specialized in deployment and you'll be assigning issues to them. That's not a good situation anyway, but it's the least bad of them.


Again, this is something that is independent of microservices. If you have a modular mono codebase then one team can easily deploy commits that only touch their module.


I always thought that microservices would be helpful if you had frequent transient programmers, like contractors, where you can onboard a new person faster by having them focus on a much smaller view of the system.


This phenomenon is just part of human nature. Same thing happened with OOP, design patterns, TDD, etc.

To apply microservices effectively, you should first build the monolith, modularizing at the source code level and adding choke points as needed. Over time, microservices will naturally roll off the monolith, not unlike boulders rolling off a mountain after rain or an earthquake. Don't go dynamiting in anticipation.


You don't necessarily need to build the monolith - design it first, for sure, but then pick apart the design and see where the individual components lie.


Right. And I'm saying most folks either don't know how to do that, or can't because of constraints like time and fluidity. The design one starts with and the design one ends up with are more often than not miles apart.


Having read one of the success stories linked in the beginning (http://philcalcado.com/2015/09/08/how_we_ended_up_with_micro...) I think I am starting to get it. That was a rather candid article describing a team which, after getting burned by a sprawling "monorail" architecture, decided to split it up into services (nothing micro- there) based on an organizational viewpoint (if there is some knowledge about a part of the application shared by 3-4 engineers, it makes sense to split it into a separate module so it can be developed faster). And as I infer from the pictures, their services don't call each other much, so it is really close to a modular monolith. So if "migrating to a microservices architecture" really means "splitting that damned monorail into something more manageable", then it is a really good idea after some point.


I can't help but think that much of the intent behind "cargo-culting" is simply people building up their resumes for future career development.

If you want to work with a sexy new technology, but you need experience in that new stuff to be marketable, it is totally understandable to try to build up skills by forcing the implementation of over-sized solutions.

In other words, many employers aren't willing to take on folks if they don't have the requisite experience on some new stack and that compels folks to gain that experience anyway they can, including "cargo-culting" stuff that isn't necessary just for the experience gain.


That hasn't been my experience, from the people I've seen. It's mostly just excited engineers wanting to play with a new thing (which is great) but doing that in the most available place, their job (which is bad).

The advice I give them is "do whatever you want in your house (or your side project), but critically evaluate your business needs and only use what makes sense for your business".

Personally, I have a very low-traffic guinea pig side-project that I like working on, and I just try every new thing there.


Well, yes, but I would say that the desire to "play with a new thing" is very much tied to career development.


The people I know aren't doing it in a conscious way to further their career, they just like playing with new things. The career advancement is incidental.


I'm willing to bet that it is a bit of both.


I hear this often. I've rarely witnessed it.

Generally, when I hear somebody dismiss a suggestion because they think its real purpose is career progression, I am immediately suspicious of the accuser.


I've worked with people who explicitly said that the reason they wanted to go with technology XYZ was because it looks good on a resume.


I've worked with people who made technology choices based on how good it would look on their CV, and people who have refused to work in particular areas because the technology wasn't sexy enough for them.


The article is a little weak, but well worth the read.

I love the microservices concept, but fair warning: as bad as OO has gotten over the past 20-30 years, microservices promise to be even uglier.

Why? Because not only are you mucking around in the code, you're also mucking around in how everything connects to everything else in your cloud.

Just like we saw vendors come out with click-and-drag ways to create new classes, now we're seeing vendors start to sell "pre-finished" microservices. Get the disk out of the box, boot it up, fill out a couple of forms, and voila! Now you have microservices.

That worries the living crap out of me, because microservices are the architecture of the future. You just can't get from here to there using a magic bullet. Learn you some pure FP, make everything composable using the Unix philosophy, and keep your LOC to a bare minimum. Toss out every damn thing you don't need.

As much as I know they are the way forward, I have a bad feeling that consultants will have plenty of billable time coming up straightening out a lot of messes.


Rule of thumb: divide the number of full-time backend engineers by 5 and you get the ideal number of microservices :-).

Too many microservices is a complexity mess; too few means you have a monolith that is hard to iterate on.


Looks like I can afford half a service, then :P


Looks like you can afford to have a monolith and not worry about this microservice thing :-P.


I don't agree that scaling up is an unqualified advantage of microservices. In practice you have to coordinate communication between your services. Though some of this communication will be asynchronous publish-subscribe (i.e. speed unimportant), some communication will necessarily be synchronous.

If one heavy part of your rails app takes up 90% of the processing time, there is nothing wrong with just getting a bigger machine for the whole app. The bigger CPU/memory/whatever will be spent on the heavy part and the rest will be normal.

For most businesses, scaling is not a problem - they can just get bigger machines. Having to re-implement transactions across your microservice architecture really is a problem. Very often transactions need to cross microservice boundaries, and that really requires a lot of thought.


>> that really requires a lot of thought

Yes, sure, but that's resources spent right - not on the monolithic spaghetti crap you waste time on trying to figure out what went wrong, which is 99% of businesses' everyday activity.


Spent right? Implementing a protocol to coordinate transactions (if it is possible at all, depending on the various backends you work with) to get what you could have already done in your database?


You can keep the transactions in one part of the system and the queries in another - CQRS being one example.


There are other options, like spend the same resources building a monolith that is not spaghetti. And it is certainly possible for the sum of microservices to become spaghetti.


I'd take a stronger line than this. Microservice architectures scale worse than monoliths. Because with a monolith, you can just do this:

http://paulhammant.com/2011/11/29/cookie-cutter-scaling/

Moreover, unless you're doing something intrinsically computationally expensive (video transcoding or whatever), or you've screwed up, your bottleneck will be in the database anyway. Scaling the database looks exactly the same for monoliths and microservices: you can scale up, scale out, or split.


It is slightly different, in that it is already a sharded database compared to what the monolith would have.


If you've built your microservices on top of multiple databases, then you could have built the same features in a monolith on top of multiple databases. What helped scaling is splitting the database, not splitting the application.


I see the strengths and weaknesses in the article, and the complaints from all the comments here, but I still find the trade-off of microservices worth it. It requires sophisticated ops and well-defined deployment tools and dev environments, but we have to handle ten billion requests a month on our stack. The ease with which we handle that scale, and the speed at which engineers get to iterate and deploy, makes microservices all the more worth it.


That's exactly the problem, though. You would find the trade-off worth it if you have ten billion requests a month. The problem is that people do it while only needing a million requests a month.


If you used those sophisticated ops and well defined deployment tools and dev environments to develop a monolith, wouldn't it be just as easy?

As for scale: are you suggesting that having extra code executing and network calls being made in your system makes it more scalable, rather than less?

Ten billion requests a month is 3805 requests per second on average; I'd guess that means 10,000 requests per second at peak (correct me if I'm wrong!). Is this considered challenging scale today? I'd buy four DL380s and call it done.


>If you used those sophisticated ops and well defined deployment tools and dev environments to develop a monolith, wouldn't it be just as easy?

No, it wouldn't allow us to be as fast. Most of our services are under 200 LOC (not a policy, just happens to be the point where people seem to split things out). The idea is that any service can be rewritten completely in a few days.

There are no tie ins to any platform, compiler version, syntax, or language. This might sound like chaos, but it's a huge productivity gain, as I feel full ownership over features I write. Naturally, we aim for good docs and code coverage, and use continuous deployment and integration tools to keep everything green.

As for scale, any microservice can be run across any number of instances without having to scale up the entire platform. This allows us to identify hot areas and deal with them effectively.

We don't use network calls (well, not HTTP or raw TCP) to communicate between services. Services themselves are pretty transport-independent and work well over TCP, but NATS is the transport of choice at the moment for inter-service communication.


> Data segregation: Since all your data now lives in different data stores, you’re responsible for relationships between data. What would be a simple cascading delete in a monolith is now a complicated symphony of dependencies, calls and verifications.

IMHO, this is the biggest problem with microservices: "Transactions" are not available in a microservice environment. You'll have to work really hard to get anything that comes close.


That sounds more like one example of a suboptimal design than a valid indictment of the whole paradigm.


The idea of a "simple cascading delete" is scaring the cr*p out of me.


I think a cargo cult also has something to do with signaling, sort of like a status symbol ("They can't really be Y if they are not X!" -> "Look at us how X we are!"). It's a self-reinforcing meme that is used as a heuristic for value estimation, but usually fails catastrophically because of its heuristic and self-reinforcing nature.


I think that's more a result of the kind of person that adopts a cargo cult mentality.


What do you mean exactly? I mean that the status symbol aspect of a cargo cult is a crucial part of its self-reinforcement mechanism.


I don't get the trend to split up everything as micro as possible.

Use a proper framework like Symfony (or, if like many people all you want is a CMS, Drupal), supporting MySQL master-slave or multi-master replication and separation of the web frontend and file hosting; host it on AWS (or plain old dedicated servers); put Cloudflare in front if you're scared of DDoS kids; and be done. If you need SSO, use either the provided SSO plugins, or an LDAP backend if the SSO is only required for various platforms provided by you.

Said architecture can be built and run on a single server, and if you're dealing with spikes you just spin up a couple of frontend servers and are done.


I don't think you get it... In most cases we are talking about applications beyond the complexity of a configurable CMS. Also, assuming that replication solves your (potential) database bottlenecks ignores write-heavy workloads: replicas scale reads, not writes.


Many people (and especially consultants) exaggerate the customer's demand for a complex system.

I have seen people shipping SAP to small brick-and-mortar stores with a tiny webshop...


Agree with the premise. It's an excellent example of premature optimisation, or YAGNI. An alternative is to factor your code by business capability / bounded context, as microservices endorse. Factor the code that way, but don't deploy the logical partitions as physical ones.

Keep it all in one deployable artefact, in-process, for as long as you possibly can. Use an in-proc message bus first; don't dive into Rabbit until you know you need it (toy sketch below). As soon as you take on the infrastructure cost of HTTP, MQ, monitoring, a ballooning number of boxes/VMs, and deployment complications, you'll notice the spike in operational expenditure.

Grow your architecture organically.
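
To make "in-proc message bus first" concrete, something this small is enough to start with (a toy sketch; swap the transport behind the same interface later if you ever need to):

    from collections import defaultdict

    class InProcBus:
        # Toy in-process pub/sub: the same interface shape you'd want
        # from Rabbit later, but just a dict of handlers for now.
        def __init__(self):
            self._handlers = defaultdict(list)

        def subscribe(self, topic, handler):
            self._handlers[topic].append(handler)

        def publish(self, topic, message):
            for handler in self._handlers[topic]:
                handler(message)

    bus = InProcBus()
    bus.subscribe("order.placed", lambda msg: print("send email:", msg))
    bus.subscribe("order.placed", lambda msg: print("create invoice:", msg))
    bus.publish("order.placed", {"order_id": 42})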


If I learned one thing in Software Engineering it's "modularization matters most". And microservices seem to be the epitome of this concept.

If you have to work with different people, you need a way to minimize dependencies between them.

Also, the more encapsulated things are, the less the starting skill of a person matters. You just need people who get things done. Later you can switch out the bad modules easily. Which is a huge economic factor.

I can't count the hours I've spent fixing horrible monoliths, or the years it took to replace them.

But if there is a horrible microservice, you can replace it in a fraction of the time.


Sadly, often the reason you need to work with different people is that you've architected your product to be so complicated that it needs lots of guys working on it.

It's amazing how much a team of one can do if you don't saddle said team with arbitrary complexity such as a microservices architecture. Maybe you'll need to scale to that level one day. But you'll definitely want to ship. One guy and a sane architecture can do that.


TL;DR Microservices have their place, and can be useful for certain environments, but they are not a fix-all.

They can be pretty nice for multi-tenanted development environments. Sure, you could use any of the other isolation techniques, but being able to provide an environment that can be started quickly (and somewhat easily, depending on the other services required) is a real advantage. Not to mention that the popularity of container systems and how easy they are to understand (a Dockerfile vs. an RPM spec) means other people can hack away at the dev environment without having to know the ins and outs of building proper packages (although they should learn).

Now, for a production environment, I would never move to a microservices architecture, for the reasons listed in the article and my own dislike of adding overhead and complexity to solve "issues" that can be easily dealt with using tools that have existed for years (proper packaging with dependencies, etc.).


I agree with the YAGNI-ish approach, but talking about micro services as if they provide modularity is entirely off-base. The decision to use micro services should be driven by scalability, not modularity. If you're saying that it's going to be terrible to make changes to your codebase, simply bolting micro services on top of that is going to make things worse.

A well-designed micro service architecture is modular in that each micro service is basically a nice wrapper around either a query or an update. But you can organize your application into an API of queries and updates without micro services.
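
In code, that organization is nothing more exotic than a module whose public surface is queries and updates (a sketch; the in-memory dict stands in for the real datastore):

    # A module boundary, not a network boundary: the rest of the app
    # imports these functions and never touches the storage directly.
    _ORDERS = {}  # stand-in for the real datastore

    def get_order(order_id):  # query
        return _ORDERS[order_id]

    def place_order(order_id, user_id, items):  # update
        _ORDERS[order_id] = {"user": user_id, "items": items}
        return _ORDERS[order_id]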

To be honest, if you don't at least intuitively understand this, you have no business architecting a production system large enough that this matters.


Many of the advantages of microservices can be achieved by refactoring your monolith code to be less monolithic.

I would suggest using functional styles wherever possible, plenty of isolated unit testable code, and a hexagonal architecture http://alistair.cockburn.us/Hexagonal+architecture that pushes all the I/O, mutation, side effects, etc. to the very boundary of your code. Also see Gary Bernhardt's "Boundaries" talk for more interesting thought in that vein https://www.youtube.com/watch?v=yTkzNHF6rMs
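
A tiny illustration of the style (my own sketch, not taken from either link; db and payment_gateway are placeholder ports):

    # Functional core: pure, trivially unit-testable, no I/O.
    def apply_discount(total, customer_is_loyal):
        return total * 0.9 if customer_is_loyal else total

    # Imperative shell: all I/O and side effects live at the boundary.
    def checkout(order_id, db, payment_gateway):
        order = db.load_order(order_id)                            # I/O in
        total = apply_discount(order.total, order.customer.loyal)  # pure
        payment_gateway.charge(order.customer, total)              # I/O out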


There is also a distinct lack of tools for debugging coordination and scheduling problems in a microservice (or, as they used to call it in my day, Service Oriented Architecture) system.

In an asynchronous RPC scenario, does Microservice A listen for the appropriate response message from Microservice B before continuing work on Request X99? Does it respond to all messages in the appropriate order? What happens in a cascading failure scenario when the back-end system Microservice B relies on is taking too long due to bad hardware/burst traffic/DDOS/resource contention?
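
Most teams end up hand-rolling correlation IDs just to answer that first question. A rough sketch (the bus interface and subject name are invented):

    import uuid

    # Tag every outgoing request so that replies (and log lines) can be
    # matched back to the originating request, even out of order.
    pending = {}  # correlation id -> request context

    def send_request(bus, payload):
        corr_id = str(uuid.uuid4())
        pending[corr_id] = payload
        bus.publish("svc_b.requests", {"corr_id": corr_id, "body": payload})
        return corr_id

    def handle_reply(reply):
        ctx = pending.pop(reply["corr_id"], None)
        if ctx is None:
            # Late, duplicate, or misrouted reply: exactly the case
            # that's so painful to debug without tooling.
            print("orphan reply:", reply)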

Do you have tools that can analyze your program for critical sections where you need explicit locking and ordering mechanisms? Do you have analysis tools that provide guarantees that your fancy distributed architecture is complete/correct?

These are just a sample of the things OpenStack has to think about -- a micro-service architecture for managing, orchestrating, and authenticating access to data-center resources. It's a hard, hard problem and an on-going effort by thousands of well-paid engineers across the globe to get right.

I have no doubt that a small team of talented developers could stand up a system of APIs around their core services to get a system running. However I can guarantee that they will be making huge trade-offs in terms of correctness and reliability.

At least with a monolith (is that a pejorative?) application you do have tools to analyze and debug your code that work well and have been battle-tested for a couple of decades. I suspect you would produce fewer bugs if you were constrained for developer talent and time.


Microservice architecture is good for evolving a monolith that needs to scale when dealing with a huge volume of calls. It's great for experimenting with different implementations and technologies, and for A/B testing. It also enforces single-responsibility modules at the architecture level, avoiding bad practices when you are dealing with different/remote dev teams.

There are challenges, though. One of them is that implementing microservices requires a cultural change in your business: you need to deal with a more complex architecture, implement your own solutions for managing it, spend time defining a devops culture if there is none, and so on.

Businesses usually differ quite a bit from one another, so you can't expect the same solution to solve your problems (for example, treating the Netflix approach as a silver bullet).

I've heard "microservices" presented as the goal so many times, just like "big data" as the solution. Again, we should analyze what our problem is and what we want to solve before buying the new shiny thing and overcomplicating everything.


Do you think you can do microservices without devops? I see the correlation you're making but I think a more traditional organizational structure can work fine.


I don't think it's impossible, just harder. As long as you have great collaboration and communication between teams, you can probably achieve the same thing.


Martin Fowler identified a lot of the same tradeoffs in this video: https://www.youtube.com/watch?v=2yko4TbC8cI

One benefit I haven't seen mentioned yet: microservices are effective at reducing the mental "page size" when working on any particular part of the system.


I think some of these points are gross exaggerations.

> You immediately increase the things your servers have to do tenfold.

Really? It's ten times as much work to implement microservices?

> Personally, I’ve seen slowdowns on the order of 1000% when moving to microservices (yes, ten times slower).

Then you implemented your microservices wrong.

I think that the author's understanding of the goals and purposes of microservices is maybe a bit misguided. Microservices are about front-loading scaling problems, not about having a clean architecture or smaller codebase. If you never need to scale, you don't need microservices (but you're probably wrong).

The flowchart at the end of the post really underscores for me that this author's argument is not genuine. He holds up this shibboleth of a "monolithic" architecture, something that doesn't really exist in 2015.


> Really? It's ten times as much work to implement microservices?

No, it says the _servers_ have to do tenfold more work, not _you_ to implement them. Whether that's correct or not is another discussion.


The application I work on most of the time is largely monolithic and usually I have no problems with that. Some parts have been extracted to their own codebases and are deployed separately because of performance reasons.

We also separated the main backend/API codebase from the frontend, mostly because the frontend devs prefer to work within the Node ecosystem instead of Python/Django, and so that we don't have to think too much about synchronizing deployments. The backend tests also take quite long to run compared to the frontend tests, so the separation is nice for the frontend devs that way too.

What I sometimes would like better infrastructure support for, though, is throwaway prototypes/projects that can live in their own codebases and have access to all the regular databases, blob storage and so on, as well as databases private to the prototype that I can do whatever I want with, with no risk of doing something bad to the important databases/storage.

I would also like these prototypes to be able to register themselves with the load balancer to take care of everything under `/halloween-experiment/` for example and have the load balancer add headers like `X-UserEmail`, `X-UserID`, `X-IsEmployee`, and so on so that I don't have to implement authentication/authorization in every prototype.
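
With those headers injected upstream, each prototype would shrink to something like this (a Flask sketch; the header names are the ones from my wishlist above, and the value formats are assumptions):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/halloween-experiment/")
    def index():
        # Trust the load balancer: it already authenticated the user
        # and injected these headers, so the prototype does no auth work.
        email = request.headers.get("X-UserEmail")
        is_employee = request.headers.get("X-IsEmployee") == "true"
        if not is_employee:
            return "employees only", 403
        return f"hello {email}"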

Today these types of prototypes need to live next to the "important" code so that they can use the same CI pipeline and easily be made public or visible to employees and use real data.

I'm following projects like https://getkong.org/ with interest, together with everything happening around Docker such as the EC2 Container Service and Kubernetes, as well as projects for service discovery/configuration like etcd and Consul; it feels like we're getting there. There are just so many projects to keep track of, and you need to figure out how to make them all part of your CI pipeline. :)


I love his little workflow diagram embedded in the article: http://media.korokithakis.net/images/microservices-cargo-cul...


I wonder if the box on the right should read "You need automated regression tests"


It should read "split up your monolith".

Just because one extreme isn't working for you doesn't automatically mean the other extreme is the right solution.


If you have such a mess that you can't change anything, regression tests help, but they're still a band-aid. It would be better to separate things out, but yeah, the answer should really say "oh my god yes we've bought all the hardware we can and it's still slow!".


Not only that: microservices are just Sun RPC and CORBA reborn, and we all know how those worked out.


Develop as a monolith, deploy as services: engines.

http://tech.taskrabbit.com/blog/2014/02/11/rails-4-engines/


"As with everything, there are advantages and disadvantages"

The author focuses on microservices; however, I think there is a larger point to be made. It is not that some particular architectural pattern is good or bad, it's that when you don't fully consider the requirements of your application and apply some pattern or technology just because it's the hot item this week, you are going to end up with problems. In my experience this has less to do with microservices and more to do with less-technical managers making decisions for a project they don't fully understand.


Damn! Now what is left as the next big thing?


Whatever was the next big thing after SOA! Just look at the Slashdot archives for 1990, we can probably figure it out quickly.


A micro-services service-directory micro-service?

Maybe we should call it MDDI?


A docker container for each line of code.


How about Service Data Objects, as implemented in Meteor :-)


At my job we've avoided microservices thus far. 90% of our deployments are just to Heroku. Every now and then I lament in my mind that we aren't using the coolest new tools (Docker, microservices and all the things that come with), but what we have works really well, and we can easily scale up by 10x and things will still work.

Every time I think of the mess that it will cause to break up things to microservices, I'm glad we aren't doing it- yet. When the time comes, we'll roll out to services as-needed, but that day isn't today.


The problem with microservices for us has been the composition of operations. Yes, we use the Rx* observable patterns and they help, but the code is still non-intuitive for new developers if the language is mostly procedural/imperative. Even with languages like Scala it still gets confusing. And even if you have a language where threads are cheap (Go), you still have to compose the operations.

I have been meaning to see if there are Hystrix-like microservice frameworks for Haskell (Hystrix is what we use).
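
Even in plain asyncio the composition is doable, but it gets verbose as soon as timeouts and fallbacks enter the picture, which is exactly the problem (a sketch; the two fetchers are stubs):

    import asyncio

    async def fetch_profile(user_id):
        ...  # would call the profile service

    async def fetch_recommendations(user_id):
        ...  # would call the recommendations service

    async def render_page(user_id):
        # Fan out to two services, but don't let a slow recommendations
        # call take the whole page down: degrade with a fallback instead.
        profile_task = asyncio.create_task(fetch_profile(user_id))
        recs_task = asyncio.create_task(fetch_recommendations(user_id))

        profile = await profile_task  # required: fail if this fails
        try:
            recs = await asyncio.wait_for(recs_task, timeout=0.2)
        except asyncio.TimeoutError:
            recs = []  # Hystrix-style fallback, hand-rolled
        return profile, recs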


Yet another article that misses a huge aspect of microservices: Reusability. (I'm going to borrow from an older comment [1] here.)

Almost all of the swathe of microservices we've developed internally are general-purpose. We've built a dozen or more user-facing apps on top of them. If I wanted to build a new app today, I would typically sit down and write a Node + React app, configure some backends, and I'd be done. I don't need to write a new back end because I can just call our existing services.

If you look at what a modern web app is, most apps these days are actually stupidly similar. They typically need things like:

* User accounts

* Authorization with existing OAuth providers (e.g. Facebook)

* Some kind of database to store and search structured content

* Notifications (email, text, push)

* Storing images or video

* Syncing data from external sources

* Analytics

We have generalized, reusable microservices that do all of this.

Let's say I want to build a HN-type link aggregator with comments. I will use our document store to store the links and the comments in a nice hierarchical structure. I will use our login microservice that mediates between an identity data model and an OAuth account registry. I can use our tiny microservice devoted to recording up-/downvotes. I can use our analytics backend to record high-level events on every UI interaction.

I can write this without a single new line of backend code.
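
In code, the aggregator above is mostly plumbing between existing clients. A sketch (every client and method name here is an invented stand-in for our internal services; the point is the composition, not the exact APIs):

    def submit_link(auth, docs, events, session_token, url, title):
        user = auth.resolve(session_token)                # login service
        link = docs.create("links", {"url": url,
                                     "title": title,
                                     "by": user.id})      # document store
        events.record("link_submitted", user_id=user.id)  # analytics
        return link

    def upvote(auth, votes, session_token, link_id):
        user = auth.resolve(session_token)
        votes.cast(user_id=user.id, item_id=link_id, delta=1)  # vote service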

This ability to "pick and mix" the functionality you need is the real, largely undiscovered beauty of microservices, in my opinion. It's the same idea that makes AWS attractive to many people: you're building on a foundation of thousands and thousands of hours of work and reusing it.

We just whipped up a new site recently where 95% of the work was purely on the UI, since all the backend parts already existed. The remaining 5% was just code to get data to the system from a third-party source, plus some configuration.

Reusability requires that you plan every microservice to be flexible and multi-tenant from day one. It's a challenge, but not actually a big one.

Is it possible to do this monolithically? Sure. I would be afraid of touching such a beast. We have very few issues with code devolving into "legacy", for example; the strict shared-nothing APIs ensure that half-baked, client-specific hacks don't sneak into the codebase. If anything messy happens, it happens in the client app, and that's where it should happen. Eventually you'll throw the app away, but the backends remain.


Of course microservices are just another tool in the toolbox.

I think what's frustrating is the lack of support in moving from a monolith to a microservice architecture. I haven't built a lot of them myself, but it feels like you're rolling your own framework/architecture whenever you need to make the transition. Is that anyone else's experience, or is it just not possible to codify best practices?


We're currently in the process of separating several monolithic applications into services. We've found success in identifying individual "resources" that are present and common in each of the monoliths. Once identified, we implement that resource endpoint in our service and swap out the existing code with calls to the service. Divide and conquer.
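
In practice the swap tends to look like a strangler-fig move inside the monolith (a sketch; the flag and client names are invented):

    # The old code path stays until the new resource endpoint has proven
    # itself in production; then the flag flips and the dead code goes.
    def get_invoice(invoice_id, flags, invoice_service, legacy_db):
        if flags.enabled("invoices-via-service"):
            return invoice_service.fetch(invoice_id)  # new service call
        return legacy_db.load_invoice(invoice_id)     # old monolith path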


The "micro" in microservices is the issue. It reminds me of the "No-SQL" movement. The truth is that EVERYONE has a multi-tiered architecture. The only question is how many tiers you need. It's always more than 1.


I think one thing that microservices and some other models correctly point to is that the "tiers" of the traditional 2-tier, 3-tier, and n-tier architectures aren't universal, and that there are good reasons for some systems to have components arranged in some manner other than stacked, front-to-back layers.


If you design your software with a bad architecture you will have problems; if the services and their data are "cut" the wrong way, you will get performance (and other) problems.

That's valid for functions, state, APIs and service stores.


The thing that nobody addresses, and especially not the microservices gurus, is how you know where and what to split into microservices.

When does splitting a service add enough value that it is worth the cost of performance and added complexity?


We should grab lunch to chat about this.

Simply put, you split a microservice when you need to split teams.

Microservices aren't a solution to a technical problem, they're a solution to a social/organization problem (described by Conway's law).


I think it is fascinating how an idea can emerge with a fuzzy meaning and, in the space of 2 years, become rigidly associated with a narrow set of technologies which will surely be much more temporary than the idea itself, thus forcing people, after 3 or 4 more years, to come up with a new word for roughly the same idea.

In the summer of 2013 I was working at Timeout.com and we were trying to reinvent the architecture of the site. Timeout.com had spent several years using the PHP framework Symfony to build a massive monolithic CMS, and the thing was a disaster. It was shockingly slow: if you ssh'ed into the datacenter and tested the response time of the system under ideal conditions, from one computer in the data center to another, the average response time was 10 seconds!

This led to a long internal debate. I advocated for what I called "an architecture of small apps", because at that time none of us had ever heard the word "microservices". I did not hear that word until March of 2014, when Martin Fowler wrote his essay:

http://martinfowler.com/articles/microservices.html

But back in the summer of 2013, with permission, I published the whole internal debate that we had had at Timeout.com:

http://www.smashcompany.com/technology/an-architecture-of-sm...

You will notice that you don't see the word "Docker" in my essay, nor do you see it in Martin Fowler's essay. And in my essay, I suggest we use ZeroMQ to bind our apps together.

But two years after we had our internal debate, I've noticed that more and more people now associate "microservices" with a very specific set of implementation details: Docker, Kubernetes, HTTP and service discovery.

I acknowledge that these four technologies can be combined in very powerful ways. I currently work at the startup incubator run by NYU, and I get to eavesdrop on what the folks at lsq.io are doing, since they sit next to me. And I get that Pelly is a frighteningly smart guy doing extremely cutting-edge stuff. I totally admire everything they are doing.

However, I personally feel that I'm following a microservices strategy, and yet what I'm building is still a lot like what I described in my essay of 2013.

July 30th, 2013 http://www.smashcompany.com/technology/an-architecture-of-sm...


An example of microservices that just work (tm): the Internet.


monolithfirst.com


monolithfirst.com

fight the future



