
Depends on what you wanted from microservices. If all you wanted was scale, then Rails ActiveJob solves that very effectively, allowing you to scale your job-runners.
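For the scale-the-job-runners point, a minimal sketch of what that looks like (class, queue, and model names are illustrative, and I'm assuming a Sidekiq-style backend):

    # Illustrative ActiveJob sketch: the web process only enqueues, the heavy
    # work runs on the job-runner fleet, which scales independently of the
    # web servers. Names are made up.
    class ThumbnailJob < ApplicationJob
      queue_as :default

      def perform(user_id)
        user = User.find(user_id)
        # ... the expensive work goes here (image crunching, emails, etc.)
      end
    end

    # Enqueue from a controller or model:
    ThumbnailJob.perform_later(user.id)

    # Scaling then just means running more worker processes, e.g. with Sidekiq:
    #   bundle exec sidekiq -q default -c 10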

If you're looking for the "mindset" of microservices, where you store the data separately and independently, then I believe Rails discourages that mindset.




I keep hearing this "microservices to allow scale", where "scale" implies performance, as a counterargument against microservices.

Honest question: who ever seriously proposed microservices to improve performance? It doesn't take a lot of thought to figure out that microservices have overhead that will always put them at a disadvantage compared to a monolith in this regard.

The only "scale" that makes sense wrt microservices is that of the scale of teams, and manage-ability of the scale of features and codebase. They are primarily a solution to "too many people working in too many poorly bounded domains". But as a solution to performance problems? Is it really proposed as that?


This was seriously proposed by some, e.g. "scaling services independently".

Scaling services independently is usually a recipe for outages where something is at the wrong scale. Sometimes you want to separate workloads that don't fit the request-response model well because they take too long or use too much CPU or RAM, but you don't need microservices to get that benefit.

I don't think anyone was claiming it would lower latency for typical requests except maybe indirectly through materializing views in event-driven architecture.

I think the steel man has been about scaling teams, but the discourse was not limited to that.


The idea of reserving some capacity for specific workloads makes sense, but that's mostly a load balancer / job scheduler configuration thing. The latent capability to serve other workloads physically hanging out in the same binary is really unlikely to have a material impact if you're not sending it any work.


It was proposed in the sense that Ruby, or Python, or whatever webserver language you used (Perl, PHP, even JavaScript) was slow, single-core, synchronous, database-blocked, or whatever else made it "unscalable", so you built this tiny service that focused only on your core bottlenecks - like an API call that only returns the coordinates of your map position - on things like AWS Lambda.

Then, for some reason, some junior engineers thought you could make everything an API call, build each service in the most optimal language, and glue them all together to have functional "scalable" apps.

And thus the horrors of being a web dev in 2016+ began.

Of course it didn’t help that SPAs were encouraging backends to be decoupled from front ends and completely hidden in their implementation, so the fact that “it was now possible” enticed backend devs to experiment with multiple API services.


Well, Ruby (on Rails) is slow, single-core, synchronous, database-blocked and hard to scale. But certainly almost everyone realises that's not a feature of it being a monolith, but comes from its language/stack/paradigms (AR, templating, dynamic, JIT, etc.)?

I have, certainly, replaced some endpoints in Rails apps with Lambdas, Rust, or even standalone Sinatra services for performance reasons.

For example, an endpoint that generated "default stable avatar PNGs" for new users: Ruby just isn't cut out for image generation and manipulation. Rewriting that in a stack that performed 100x better in this use case (we picked Rust) took a lot of heat off the cluster of servers.
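For anyone who hasn't hit this: the pure-Ruby version of such an endpoint tends to look something like the sketch below (hypothetical, using ChunkyPNG; not the actual code) - per-pixel work in Ruby, which is exactly the CPU-bound part a Rust rewrite wins on.

    require "chunky_png"
    require "digest"

    # Hypothetical sketch of a "stable default avatar" generator: colour and
    # pattern are derived from a hash of the user id, so the same user always
    # gets the same PNG. Pure-Ruby per-pixel work like this is slow, which is
    # why extracting it to a faster stack pays off.
    def default_avatar_png(user_id, size: 256)
      bytes = Digest::SHA256.digest(user_id.to_s).bytes
      color = ChunkyPNG::Color.rgb(bytes[0], bytes[1], bytes[2])
      image = ChunkyPNG::Image.new(size, size, ChunkyPNG::Color::WHITE)

      size.times do |x|
        size.times do |y|
          # simple block pattern seeded by the hash bytes
          image[x, y] = color if bytes[(x / 32 + y / 32) % bytes.size].odd?
        end
      end

      image.to_blob
    end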

Or moving OAuth and registration to a separate Rails app that served those pages - the only endpoints that did HTML. That allowed the "main" Rails app to remain leaner by not loading all of the templating and other HTML middleware into memory when it would never be used.
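Rails supports that split fairly directly, for what it's worth: the JSON-only app can skip the HTML machinery by requiring only the railties it needs and setting api_only, while the small auth app stays a full Rails app. Roughly (a sketch, not their actual config):

    # config/application.rb of the "main" JSON-only app (sketch).
    # Requiring individual railties instead of "rails/all" keeps the asset/
    # view machinery out of the process, and api_only trims the middleware
    # stack (no cookies, sessions, flash, etc.).
    require "rails"
    require "active_record/railtie"
    require "action_controller/railtie"
    require "active_job/railtie"

    module MainApp
      class Application < Rails::Application
        config.api_only = true
      end
    end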

In that sense, I guess, monoliths can have a performance disadvantage: they require the entire app to load stuff for that one endpoint or feature, even if 99% of the requests and users never use it.

Like the "PDF generation for reports" we once had, that was rarely used but still loaded in every running thread that would never handle anything related to reports or PDFs. Extracting that to a separate "PDF report generation worker" freed GBs of memory on almost all servers.


Yes, this is the sensible and necessary side of microservices...

Now, take your auth logic and put it on a 3rd party, rewriting all of your auth to do so.

Now, make your database shared across multiple distribution platforms and 12 services (AWS, cloud, Heroku, Tableau).

When one of your 15 services goes offline for temporary maintenance, for some reason your entire website goes down.

The 17th service you created had its IP address switched, is now missing, and the response to all URLs is the default Apache gateway page.

The 24th service upgraded from Node 12 and is now broken, while the 26th service, built in Go, doesn't compile on the specific Linux variant one of your devs uses.

Before you know it, you're just doing maintenance work because something is broken and it isn't your code, it's some random downtime or brittleness that is inherent in microservice architecture.


What you describe is the common problem of "management of complexity" - or, really, the lack thereof.

These problems are independent of "microservices" vs "monolith". They are independent of "using a framework" vs "no framework". They are independent of programming-language or hosting infra.

Managing complexity, in itself, is a daunting task. It's hard in a monolith, it's hard in microservices. Building a tangled big ball of spaghetti is rather common in e.g. Rails - it takes a lot of experience, discipline and dedication to avoid it.

Languages (type systems, checkers, primitives), frameworks, hosting infra, design patterns, architectures, all of these are tools to help manage the complexity. But it still starts with a dedication to manage it today, and still be able to do so in a decade.

Microservices don't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "services". Just as a monolith doesn't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "modules".


Image manipulation is the one thing I also run as a microservice whenever needed. I just set up imagor once and never need to manage that in the installation/images of all other apps. No extra memory for shelling out to libvips or ImageMagick needed.
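In case it's useful to anyone: since imagor speaks thumbor-style URLs, the app side reduces to building a URL, so no image library lives in the Ruby process at all. A rough sketch, assuming an instance running in unsafe mode at a made-up internal host:

    # Rough sketch: with imagor running (assumed here in "unsafe" mode at
    # imagor.internal:8000), the app only constructs thumbor-style URLs and
    # imagor does the resizing.
    IMAGOR_BASE = "http://imagor.internal:8000"

    def thumbnail_url(source_url, width:, height:)
      "#{IMAGOR_BASE}/unsafe/#{width}x#{height}/#{source_url}"
    end

    thumbnail_url("https://example.com/uploads/photo.jpg", width: 300, height: 300)
    # => "http://imagor.internal:8000/unsafe/300x300/https://example.com/uploads/photo.jpg"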

The PDF use case also sounds like very promising low-hanging fruit.


> very promising low-hanging fruit

That was actually exactly our angle of attack: find the routes, modules, or jobs that were putting the most pressure on the servers.

Then copy the entire app over to separate servers, connected to the same DB cluster. Have an app router direct everything except, say, /reports/ to the old servers, and /reports/ itself to the copies on the new servers.

Did the old servers see a significant reduction in load? Rip out that part there. Better? Now rewrite, clean up, isolate, or extract the /reports/ part on the new servers. Better?

Then, finally, disconnect the service from the shared DB (microservices sharing a DB is the worst idea ever) and have it communicate either via a message bus, via REST calls, or not communicate at all.
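That last step can be as small as replacing a shared-table write with an HTTP call. A hypothetical sketch (the endpoint and payload shape are invented for illustration):

    require "net/http"
    require "json"
    require "uri"

    # Hypothetical: instead of the main app inserting into a shared
    # "report_requests" table, it asks the reports service over REST.
    def request_report(account_id)
      uri = URI("http://reports.internal/api/reports")
      response = Net::HTTP.post(uri, { account_id: account_id }.to_json,
                                "Content-Type" => "application/json")
      JSON.parse(response.body)
    end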


Scale in the sense that you can scale that one part of the system independently when it is a microservice.

You can still run into situations where adding a network call is a small overhead compared to the optimization available when the service has its own database running on its own VM, where you can add more resources just for that specific thing.

Maybe you can rewrite that part in a language that better fits the use case, for only that service.



