
I was an early preacher of microservices. In theory, it was great: independently scalable, rewritable in any language, and of course a fault in one piece wouldn't bring down the others.

I learned the hard way what it really meant. A fault in one affects the others. What should be a simple bug fix requires changing code and deploying across multiple applications in various states of disrepair. A true nightmare.

For large enough teams with applications under active development, I still think microservices are a good choice. But for myself and my small team, monolithic applications are so much easier and faster to maintain.




Microservice design (or rather, any good design distributed over factors such as time, location, knowledge, structure, QoS, or programming language) depends on the following criteria:

- Orthogonality: service X's requirements are independent of service Y's. Your mail client is not a web browser. Your payments platform is not a web-shop.

- Programming against interfaces. Your interface should stay relatively stable across iterations, while your implementation can change (see the sketch after this list).

- A comprehensible interface that fits in short-term memory. This means the service/component does not entail more than, say, seven 'items'. For example, an authentication service should not involve more than seven concepts (token, credentials, state, resource owner, and so on).

- Related to orthogonality: failure of this service should not entail (immediate) failure of another. This is akin to how the Arpanet was designed.

- No singletons. Services should not be designed in such a way that one, and exactly one, instance is running.

Follow these guidelines, and microservice design becomes manageable and scalable.
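
To make the "programming against interfaces" point concrete, here is a minimal Python sketch; the TokenStore and InMemoryTokenStore names are hypothetical, purely for illustration:

    from abc import ABC, abstractmethod

    # The interface: kept small and stable across iterations.
    class TokenStore(ABC):
        @abstractmethod
        def save(self, user_id: str, token: str) -> None: ...

        @abstractmethod
        def lookup(self, token: str) -> str | None: ...

    # One implementation; free to change without touching callers.
    class InMemoryTokenStore(TokenStore):
        def __init__(self) -> None:
            self._tokens: dict[str, str] = {}

        def save(self, user_id: str, token: str) -> None:
            self._tokens[token] = user_id

        def lookup(self, token: str) -> str | None:
            return self._tokens.get(token)

    # Callers program against the interface, not the implementation.
    def authenticate(store: TokenStore, token: str) -> bool:
        return store.lookup(token) is not None

Callers depend only on the abstract TokenStore, so a Redis- or database-backed implementation can be swapped in later without rippling outward.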


You say you were using microservices, but also that a bug in one service led to redeploying many services. That sounds really odd, and like you weren't doing microservices at all, but actually just a monolith with many pieces.

I think "Can I deploy a bugfix without redeploying the world" is one of the base criteria for determining if you have implemented microservices or not.


> You say you were using microservices, but also that a bug in one service led to redeploying many services. That sounds really odd, and like you weren't doing microservices at all, but actually just a monolith with many pieces.

But the problem with microservices is that there is nothing to prevent you from creating these types of potentially incompatible dependencies. If you are separating functionality using functions, the compiler helps you spot problems. If using classes or libraries, the linker can help spot problems.

With microservices, there aren't good tools for preventing these types of problems from arising. Maybe one day Docker, Kubernetes, or something else will make it easy to create and enforce real boundaries. However, as long as microservices are just an "idea" without a set of tools that help you enforce that idea, it's very easy to introduce dependencies and bugs in ostensibly "independent" microservices.


> But the problem with microservices is that there is nothing to prevent you from creating these types of potentially incompatible dependencies.

Sure, it's totally a matter of idiom; this is why I stated that a big problem with microservices is people jumping into them thinking they're SOA. Microservices, as a discipline, require some care. It could be argued that the care required is too high to be worth it, though.

> If you are separating functionality using functions, the compiler helps you spot problems. If using classes or libraries, the linker can help spot problems.

Maybe you could be more specific? I don't see how a compiler will help prevent coupling of modules/internal components. What problems are you referring to?

I agree about microservices being an idea without an obvious implementation - that's a fair criticism.


>> If you are separating functionality using functions, the compiler helps you spot problems.

> Maybe you could be more specific? I don't see how a compiler will help prevent coupling of modules/internal components.

A super simple but effective way compilers prevent bad behavior is by preventing circular includes/imports. In a programming language, if module A imports module B, then module B can't easily import module A.* The compiler/interpreter will complain about circular/infinite imports. This general rule motivates developers to organize their code hierarchically, from high-level abstractions down to lower-level ones, and militates against interdependencies.

In contrast, there's nothing to stop microservice A from contacting microservice B, which in turn calls back into microservice A. It's so easy to do that many will do it without even realizing it. If you're developing microservice B, you may not even know that microservice A is using it.

Designed correctly, microservices can be another great way to separate functionality. The problem is actually designing them correctly, and keeping that design enforced.

* Sure, there are ways of effectively doing circular imports in programming languages, but they usually require some active choice by the developer. It's hard to do it by accident.
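
To make the contrast concrete, here is what the interpreter does with a circular import in Python (module names are made up):

    # a.py
    from b import helper          # a depends on b

    def top_level():
        return helper()

    # b.py
    from a import top_level       # b depends back on a: a cycle

    def helper():
        return top_level()

    # $ python a.py
    # ImportError: cannot import name 'helper' from partially
    # initialized module 'b' (most likely due to a circular import)

Two services calling each other over HTTP get no such warning; the cycle only surfaces at runtime, e.g. as a deadlock or a cascading failure.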



I don't think this is a "No true Scotsman" situation. I didn't defend microservices on the premise that "Good" users will do it right, I stated that what is described is not microservices.

Similarly, if I build a microservice architecture, and then say "My monolith is too hard to manage because it's in so many pieces" I think it would be fair to say "You didn't build a monolith".


> I stated that what is described is not microservices.

You're sliding the definition of microservices over from the obvious one (many small services [1]); that's what makes it a 'No True Scotsman.'

With that said, I don't think you necessarily committed a fallacy; it's just a matter of phrasing. "You weren't doing microservices at all" is the fallacy, but the underlying message is sound: it's not enough to split a monolith into services, you also need to spend some time architecting the services to play well with each other.

But I think it's unhealthy to say "you didn't do microservices" instead of "this issue isn't inherent to microservices, you can overcome it," because the former sets up microservices to be a silver bullet.

[1] We can expand this definition to how Fowler and the like define it and still run into 'split monolith' problems (dependency issues being the biggest in my mind).


I disagree about the definition sliding. SOA is closer to "many small services", Microservices is SOA + a set of practices and patterns. You can do SOA and not be doing microservices.

It sounds like what they did was attempt to split a monolith into separate services, which is distinctly different from a microservice approach.


Maybe that microservice needed to change its method signature because it was missing something.


And so you don't have microservices, just code that's spread out.

A feature of microservices is the ability to add new APIs while retaining old ones (for compatibility). So you'd write the new signature and deploy; everything still works fine. Then the other services can be slowly updated to use the new signature, one at a time.
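
A minimal sketch of what that looks like in practice, using Flask and a made-up endpoint, assuming the "signature" is an HTTP API:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # The old signature stays up, so existing consumers keep working...
    @app.route("/v1/greet/<name>")
    def greet_v1(name):
        return jsonify(greeting=f"Hello, {name}")

    # ...while the new one is deployed alongside it. Consumers migrate
    # one at a time; nothing has to redeploy in lockstep.
    @app.route("/v2/greet")
    def greet_v2():
        name = request.args["name"]
        lang = request.args.get("lang", "en")
        prefix = {"en": "Hello", "fr": "Bonjour"}.get(lang, "Hello")
        return jsonify(greeting=f"{prefix}, {name}")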


What if all the other services using the old API are doing the wrong thing because the old API was wrong? The bug isn't fixed until they all use the new API.

As an extreme example, just to highlight: what if you have a login service that only takes the username and hands back an access token? This has the exploit that anyone can get superuser access simply by passing the right name. So you patch the login service to take a password as well.

But the exploit is live until all the other services are using the new API... so wouldn't you want to prevent the old API from being used, breaking any service that hasn't been updated yet?
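
One way to do that, sketched in Flask (the endpoints and helpers are hypothetical stubs): keep the old route deployed, but have it do nothing except fail loudly, so unmigrated consumers break visibly instead of keeping the exploit alive.

    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)

    def check_password(username: str, password: str) -> bool:
        return False  # stub: verify against the real user store

    def issue_token(username: str) -> str:
        return "token"  # stub: mint a real access token

    # Old, insecure signature: disabled rather than removed, so callers
    # get a clear error instead of superuser access.
    @app.route("/v1/login", methods=["POST"])
    def login_v1():
        abort(410, "Disabled for security reasons; use /v2/login.")

    # New signature: requires the password.
    @app.route("/v2/login", methods=["POST"])
    def login_v2():
        body = request.get_json()
        if not check_password(body["username"], body["password"]):
            abort(401)
        return jsonify(token=issue_token(body["username"]))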


If the interface is so wrong that the implementers actually can't use it safely, that's not a microservices problem any more than it's a monolithic architecture problem.

It's important to design interfaces before they are implemented everywhere. And the D in SOLID stands for Dependency Inversion; I think it applies here. It asks:

When you design a dependency relationship between a big (concrete) thing and a bunch of little (concrete) things, which way should the dependency relationship between them be arranged? Should the big thing depend on the little things, or is it the inverse?

There might seem to be an obvious right answer, but the way I phrased it is deliberately deceptive because according to Dependency Inversion the answer is neither. You should have an abstract interface between them that both depend on, as I understand it.
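
A minimal sketch of that arrangement in Python (Notifier, EmailNotifier, and OrderProcessor are made-up names):

    from abc import ABC, abstractmethod

    # The abstraction that both sides depend on.
    class Notifier(ABC):
        @abstractmethod
        def send(self, message: str) -> None: ...

    # Little concrete thing: depends on the abstraction by implementing it.
    class EmailNotifier(Notifier):
        def send(self, message: str) -> None:
            print(f"emailing: {message}")

    # Big concrete thing: depends only on the abstraction, never on
    # EmailNotifier directly.
    class OrderProcessor:
        def __init__(self, notifier: Notifier) -> None:
            self.notifier = notifier

        def process(self, order_id: str) -> None:
            self.notifier.send(f"order {order_id} processed")

    OrderProcessor(EmailNotifier()).process("42")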

(and I'm learning, I got my CS degree almost 10 years ago but just now finding out about these things, so nobody feel bad if you didn't know this... and anyone else who can explain better is also welcome)

This principle is important here because keeping interfaces simple and abstract makes it easier to spot issues like this one, which should never make it into production. An authentication interface that takes only one parameter, username, is something that would never make it past the first design review meeting, unless it was a specification buried in a majorly overcomplicated concrete implementation (or if they didn't look at the interfaces at all).


> If the interface is so wrong that the implementers actually can't use it safely, that's not a microservices problem any more than it's a monolithic architecture problem.

Right. My point was that there are code bugs and architectural bugs, and from what I can see microservices only really help with the first of those.


Then you redeploy everything. But you're making that call based on risk, not due to a technical requirement.


Yeah, unless it is a bug like I said. Maybe you forgot that you need to save X as well, and that all services that use it must pass X; otherwise the data saved is invalid.


This will only be true if the bug fix had local impact.

If instead the discovered bug was more fundamental in nature (i.e. a problem was found in the actual design/implementation of the signature/API itself), then every service using that API will need to change.


So you're making the same change but taking longer to do so?


I think that's fine; if a user action depends on a service being up, and it's down, the whole system appears to be down to the user. There is no way to make an unreliable component reliable without taking it out of the critical path for a request.

Consider a case where you have a bunch of apps that generate PDFs. There are two ways to structure the system; either every application bundles the PDF-rendering logic (i.e., a copy of Chrome and its dependencies) and needs to be provisioned to run it, or you have it as a standalone service that each application calls for its PDF rendering needs.

There are advantages in either approach. First consider the monolithic approach. Say the "foo" service finds some crazy edge case that causes your PDF renderer to use all the RAM on the machine it's running on. The foo service obviously goes down. But the bar service, that doesn't have data that triggers that edge case, remains up, because they are isolated.

If you have the PDF rendering functionality out in a microservice, then foo's bad behavior prevents bar from rendering PDFs, and both applications appear broken for anything that involves generating a PDF. But of course, only the PDF rendering functionality is affected. If foo is in a crash loop because its built-in PDF renderer is broken, it can't do anything. If it's just getting errors from the PDF service, everything else it does still works, so that might be better.
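
That "errors from the PDF service" mode is easy to build for; a rough Python sketch, with a hypothetical internal endpoint:

    import requests

    PDF_SERVICE = "http://pdf-renderer.internal/render"  # hypothetical

    def render_invoice(html: str) -> bytes | None:
        """Call the shared PDF service; degrade gracefully if it's down."""
        try:
            resp = requests.post(PDF_SERVICE, data=html, timeout=5)
            resp.raise_for_status()
            return resp.content
        except requests.RequestException:
            # The PDF feature is degraded, but nothing else in this app
            # is; callers can show "try again later" instead of crashing.
            return None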

You also have to consider the maintenance costs. The PDF renderer has a 0-day security problem. How do you update it in all 100 of your applications? It's going to be a pain. But if it's all behind an RPC service, you just update that one service, and all the other apps are fixed. The downside, of course, is what if you want some experimental new feature in only one app's PDF renders, but not in the others? That is harder to do when you only have one version of the renderer deployed globally; if each app bundled its own, you could safely upgrade one app to the experimental version without affecting all the others.

So my TL;DR is: it's complicated. There is no one true answer, rather there are multiple solutions all with separate sets of tradeoffs. I generally prefer microservices because my development strategy is "fix a problem and forget it". It is easy to do the "forget it" part when new features can't interact with old features except through a battle-tested API. Your mileage may vary.


Side question: so you found the best way to generate PDFs was Chrome? I've recently looked into this and it seems like the best approach; it renders nicely and can use HTML etc., but the fact that it has to spawn an external process irks me a bit.


I ended up using Puppeteer, wrapped with a node app that translates gRPC requests containing the various static files, returning the bytes of the rendered PDF. I did not dig fully into figuring out the best way to deal with the Chrome sandbox; I just gave my container the SYS_ADMIN capability which I am sure I will someday regret. Full details are available here: https://github.com/GoogleChrome/puppeteer/blob/master/docs/t...

I see no reason not to open-source it but I haven't bothered to do so. Send me an email (in profile) and I'll see to it happening. (It is easy to write. All the complexity is dealing with gRPC's rather awkward support for node, opentracing, prometheus, and all that good stuff. If you don't use gRPC, opentracing, and prometheus... you can just cut-n-paste the example from their website. My only advice is to wait for DOMContentLoaded to trigger rendering, rather than the default functionality of waiting 500ms for all network actions to complete. Using DOMContentLoaded, it's about 300ms end-to-end to render a PDF. With the default behavior, it's more than 1 second.)
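
For anyone doing this from Python instead of node, a rough equivalent of the DOMContentLoaded advice using pyppeteer (the unofficial Python port of Puppeteer); the --no-sandbox flag stands in for however you solve the sandbox issue:

    import asyncio
    from pyppeteer import launch

    async def render_pdf(url: str) -> bytes:
        browser = await launch(args=["--no-sandbox"])
        try:
            page = await browser.newPage()
            # Render as soon as the DOM is ready instead of waiting for
            # the network to go idle: the ~300ms vs >1s difference
            # mentioned above.
            await page.goto(url, waitUntil="domcontentloaded")
            return await page.pdf(format="A4")
        finally:
            await browser.close()

    pdf_bytes = asyncio.run(render_pdf("https://example.com"))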

Before Puppeteer I tried to make wkhtmltopdf work... but it has a lot of interesting quirks. For example, the version on my Ubuntu workstation understands modern CSS like flexbox, but the version that I was able to get into a Docker container didn't. (That would be the "patched qt" version in Docker, versus the "unpatched qt" version on Ubuntu. Obviously I could just run the Ubuntu version in a Docker container, but at that point I decided it was probably not going to be The Solution That Lasts For A While and we would eventually run into some HTML/CSS feature that wkhtmltopdf didn't support, so I decided to suck it up and just run Chrome.)

The main reason I didn't consider Puppeteer immediately is that Chrome on my workstation always uses like 100 billion terabytes of RAM. In production, we use t3.medium machines with 4G of RAM. I was worried that it was going to be very bloated and not fit on those machines. I was pleasantly surprised to see that it only uses around 200MB of RAM when undergoing a stress test.


I have a C# Lambda in AWS for taking screen grabs and PDFs of pages. If the service is running and hasn't idled out, it takes ~2 seconds. It takes about 8-15 on first run. Sometimes that's a tradeoff I'm willing to accept.


There's CEF, which is effectively Chrome as a library (it's one of the targets of the Chromium build process, e.g. Mac, Windows, iPhone, CEF). There are various projects that build on top of it, like CEF Python.


I know it's condescending to point out that "you are doing it wrong", but in this case, it really seems like your microservices implementation was way off. How come one microservice going down affects the others?


A single microservice going offline?

Or an existing API contract being broken, where the fix requires updating consumer services?

Maybe that's microservices done the "wrong way", but I sure see a lot of it.



