Front-End Microservices (zalando.com)
91 points by kostspielig on Dec 7, 2018 | 32 comments



I've worked on a product that was built using "Front-end Microservices". Probably one of the harder development platforms I've ever run into.

It takes a lot of diligence and uncommon knowledge to do it successfully. For instance, when I started on this project, each "microservice" was deployed individually, meaning if you went from /account to /product you were loading an entirely different app. This meant users sat through a full initial page load several times during a typical workflow.

I currently work on a massive monolith which ties our entire backend and frontend together in one app. It's a little clunky but overall it makes it a lot easier for my team and me to ship reliable code.


It's like these companies live in a totally separate universe where none of the normal software engineering principles apply anymore. The microservices cargo cult permeates everything.


This front-end micro-services architecture looks a lot like how Amazon renders its pages. Amazon's rendering framework takes in components from various teams, renders their components, assembles it all into a final page and then sends it to the client. In theory, this allows all the benefits promised by the article: teams can use whatever frameworks they want, they can architect their part of the page in a manner that suits them, etc.

In practice, it's a mess. Just go to Amazon.com and open the page in the DOM inspector, or view-source. There's no consistency between any of the page components. Things are duplicated. Things are coupled to each other in weird ways. It looks like an absolute nightmare to debug.


This is always found in independence versus centralization and factoring (duplication removal). Independence allows, well, independence, but usually leads to something very poorly factored. The best systems find a decent balance, which takes diligence, experience, and constant feedback.


Amazon has invested significantly in tooling that makes debugging this possible.

I would not recommend anyone go down this route unless they are also willing to invest heavily in the tooling support necessary.


So the "remedy" to bigger spaghetti is a bigger debugger?


No, I was answering the statement "it looks like an absolute nightmare to debug".

Amazon's page request and rendering pipeline, I would estimate, is necessary complexity. Given the large engineering force and the speed at which they want to move, it's imperative that the infrastructure support distributed teams delivering at their own pace, minimizing integration points like shared repos and integrating instead via defined APIs. This architecture is a fallout of that need. Do most companies have that need? Probably not.


It's arguably necessary complexity for Amazon. I would argue that the necessity of the complexity arises from the way that Amazon treats its teams as individual standalone business units, which prompts teams to treat their part of the website as a standalone application which communicates with other parts of the website as if they were third parties. This has advantages and disadvantages, which have been outlined elsewhere.

However, even assuming that this is necessary complexity for Amazon, it does not mean that this is necessary complexity for your organization. In fact, I would argue that this is very likely unnecessary complexity for the vast majority of organizations. I concur with others in this thread who say that this framework is the result of an assumption that splitting everything into tiny components communicating via API is a good idea for user interfaces. The main problem I have with this approach is that it encourages divergence in user experience for different parts of the web site or web application. This is definitely a problem that Amazon has, and it's a factor that should be kept in mind before moving to a UI pattern that's modeled off microservices.


I literally lolled when I read the title of this post.


I appreciate the skepticism in the comments because this is a level of complexity that just is not needed in most projects.

This solution adds value when you have an engineering org large enough to have different teams responsible for different parts of your app that are bundled into a single experience for the user. For example, on Airbnb, one team might own the listing details page itself while another team owns the booking form and flow.


Fully agree. A monolithic architecture works for most cases but not ALL cases. Large engineering orgs are one use case where a monolithic architecture starts to become less appropriate.


I only gave the article a quick look, so it's probably my fault - can anyone explain to me why this isn't just a new iteration of the concept of the Portlet (1), which was supposed to totally renovate etc. etc. ...

(1) https://en.wikipedia.org/wiki/Java_Portlet_Specification


I had the pleasure of designing and implementing a portlet application back in circa 2005-2006, in Java with JSR 164 (I may have the wrong JSR here). Initially, I thought the portlet concept was very cool and could be a game changer. After a year or so working with it, my opinion changed: it was needlessly convoluted. If the goal is to shift the rendering responsibility to the portlet service owner, then you could have used client-side rendering with AJAX as well (remember AJAX?)

In the end, I think what really killed the portlet development is the complication it causes when you have to maintain states among the portlets.


That was my thought as well. This looks like client-side portlets as a pattern without ties to a particular technology framework. I'm not sure that is a great idea as a prescriptive way to design most things, but maybe it is a good conceptual way of understanding some fairly common designs out there (eg, Amazon.com).


Server-side Includes or Edge-side includes, which would include either statically generated (but routinely updated) or dynamically generated (but cached for a short while) assets work just as well if not better, but they're "old" technologies, I guess?
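For reference, the edge-side-include approach can be sketched in a few lines: the edge layer scans a page template for include tags and splices in separately cached fragments. Only the `<esi:include>` tag syntax here comes from the ESI spec; the fragment paths and the fetchFragment stub are invented for illustration (a real setup would fetch from a cache or origin with per-fragment TTLs).

```javascript
// Toy edge-side-include assembler. Each fragment can be generated and
// cached independently by the team that owns it; the edge just splices.
const fragments = {
  '/fragments/header': '<header>Shared Header</header>',
  '/fragments/recs': '<ul><li>Recommended item</li></ul>',
};

// Stand-in for an edge fetch with its own cache policy per fragment.
function fetchFragment(src) {
  return fragments[src] ?? `<!-- missing: ${src} -->`;
}

// Replace every <esi:include src="..."/> tag with its fragment body.
function assemble(template) {
  return template.replace(
    /<esi:include\s+src="([^"]+)"\s*\/>/g,
    (_, src) => fetchFragment(src)
  );
}

const page = assemble(
  '<html><body><esi:include src="/fragments/header"/>' +
  '<main>Product details</main>' +
  '<esi:include src="/fragments/recs"/></body></html>'
);
console.log(page);
```

The point of the comment above is that this composition model predates the "front-end microservices" label and needs no client-side framework at all.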


> The best solution I’ve seen is the Single-SPA “meta framework” to combine multiple frameworks on the same page without refreshing the page (see this demo that combines React, Vue, Angular 1, Angular 2, etc).

I have very little front-end experience (most of the web dev work I've done was hand-written JavaScript). But having the overhead of one big, comprehensive framework such as React or Angular already seems like overkill to me; having several such frameworks on the same page seems outrageous, and it cannot be considered good engineering. Where are the good engineering practices of efficiency, craftsmanship, etc.?
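For the curious, the core of what a meta framework like single-spa does can be sketched framework-agnostically: each micro-app registers mount/unmount lifecycle hooks plus a predicate saying when it is active, and a tiny router swaps apps on navigation instead of doing a full page reload. This is a toy illustration, not the real single-spa API; the names (registerApp, reroute) are invented.

```javascript
// Registry of micro-apps. Each app could internally use any framework,
// as long as it exposes mount/unmount hooks.
const apps = [];

function registerApp(name, { mount, unmount, activeWhen }) {
  apps.push({ name, mount, unmount, activeWhen, mounted: false });
}

// Called on every (simulated) route change: mount apps that should be
// active for this path, unmount the ones that should not.
function reroute(path) {
  for (const app of apps) {
    const shouldBeActive = app.activeWhen(path);
    if (shouldBeActive && !app.mounted) {
      app.mount();
      app.mounted = true;
    } else if (!shouldBeActive && app.mounted) {
      app.unmount();
      app.mounted = false;
    }
  }
}

// Two "teams" register their apps against URL prefixes.
const log = [];
registerApp('account', {
  mount: () => log.push('account mounted'),
  unmount: () => log.push('account unmounted'),
  activeWhen: (path) => path.startsWith('/account'),
});
registerApp('product', {
  mount: () => log.push('product mounted'),
  unmount: () => log.push('product unmounted'),
  activeWhen: (path) => path.startsWith('/product'),
});

reroute('/account'); // account mounts
reroute('/product'); // account unmounts, product mounts
console.log(log.join(' | '));
```

Note this sketch avoids the reload-per-route problem described in the top comment, but it does nothing about the cost of shipping several frameworks' runtimes to the client, which is the objection here.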


I'm even thinking about the human issues here. If you've got different teams using different technologies, over time you're going to end up with a tangled, poorly standardized codebase where the same problem is solved 6 different ways in 6 different places, and people are afraid to make changes because it's hard to predict their impact.

I have had the pleasure of working at a microservices shop that had everything working very, very well. The system was a pleasure to work on, development was easy, the operations team was more in control than anywhere else. But I think they accomplished it by unilaterally banning a lot of the things that people think microservices will let them do. Only two programming languages were allowed (a low-level one for the most performance-critical stuff, and a high-level one for the rest), only one flavor of database was allowed, communication protocol changes and any new 3rd party libraries had to be approved by a Star Chamber tribunal that basically said no to everything, lines of communication were strictly controlled (viz., none of this "everyone talks to everyone else through Kafka"), etc. etc.

It was glorious. It really was. Easiest codebase to work in ever. Did I get to use my favorite languages or libraries? Nope. Was that holding the company back? Double nope.


Another issue is scope pollution. When you load multiple frameworks on the same page, you need something like iframes or shadow DOM to get isolation.

Typically, modern JS dependencies are pretty small minified and gzipped, especially if there is some mechanism for deduping the dependencies across UI microservices.

I think when web components/shadow DOM are finally ready we will see this type of pattern a lot more. Right now, scope conflicts make this a little difficult.

It's like we want the isolation of iframes, but with the content living within the same document flow as the rest of the document.

I work at an enterprise where we have several different applications, but we have a universal header and footer component. Right now it is a nasty mashup of jQuery and other JS libraries that just gets pasted into everyone's app without any isolation. It would be great if we could have a shadow DOM node that isolates something like a small Vue or React app from the rest of the application.

Right now I think the only way for this to work is to coalesce around a single framework.

When it comes to the download size of the frameworks, if we had scope isolation as with shadow DOM/web components, then we could actually utilize CDNs more frequently. Think a CDN for web components, where chances are that slider element you add to your page has already been downloaded. To avoid downloading the wrong file, we have subresource integrity attributes in HTML now too: https://developer.mozilla.org/en-US/docs/Web/Security/Subres....

So if people started fetching resources from a cdn by default, it's possible 90% of resources have already been downloaded by the client before they even reach the site.


You would never start out building a website with microservices. They have all the downsides people have already commented on.

But are people taking into consideration team/employee size?

If you have 100+ engineers working on a site, in different countries with different managers and different leaders with different product goals. Then the microservice architecture becomes a necessary evil.


In my experience I’ve seen good productivity out of a monolithic architecture up to 50 developers. Have others seen the monolithic architecture scale higher?


Google?


They have a monorepo - not monolithic app.


I think the expectation is that the app is relatively monolithic, as well. If you can force everyone into a single repository, why not force them to a single architecture to push data in?


If the current Gmail is any indicator, have they really succeeded?

Otherwise, I'm guessing they don't actually have that many pages with everyone working on them.


What are the benefits of doing this? Microservice evangelists never care to explain that.

They link to another blog article that says:

> The monolithic approach doesn’t work for larger web apps

> Having a monolithic approach to a large front-end app becomes unwieldy.

But in my opinion, breaking up the frontend (or backend, for that matter) the way they say you should is much more "unwieldy".

Seriously, are there any benefits at all? Job security?


This is purely anecdotal, but I feel like developers who have worked on monoliths with terrible teams tend to fall in love with the microservices approach because as long as the endpoint is stable, it doesn't matter what's going on underneath the hood. On the other hand, developers who have worked on microservices with good teams tend to think that microservices are too complex and unwieldy.


I work in finance and we have a similar setup. We have a platform type of app and many different teams working on different products. Each team is responsible for its own part, from JS web components to the Java backend micro services. There are of course teams responsible for building the portal out of the many components, orchestrating the backend of microservices, etc. It does allow teams to work and release independently.


At some point in the development of a large app you run into irreducible complexity. The logic and moving parts are just necessarily complicated and nobody is going to fully understand the system. It doesn’t matter what architecture you use, hard is hard. I think people tend to forget this and build a house of cards trying in vain to correct it.


Okay, but there are a lot of ways to split up & scale projects and teams besides microservices. What's needed is a guide for scaling that considers all the common options and their tradeoffs, and how to measure those tradeoffs against your own shop.


The hardest part is, again, client-side asset fetching/caching. The architecture seems scalable for large projects. It's hard now, but it will get easier if frontend frameworks move in this direction.




get a grip


All these trends are going to give me a seizure by the end of the decade.



