
I've always felt this problem, from the first time I touched Angular. It was just so much more complex and fragile, without much actual benefit, unless you wanted to build a really interactive async application like Google Docs or Facebook Chat.

When SPAs became the norm and even static web pages needed to be built with React, developing became more and more inefficient. I saw whole teams struggling to build simple applications and wasting months of time, while these used to be developed within a couple of weeks by just 1 or 2 developers on proven, opinionated server-side frameworks. But that was no longer according to best practices and industry standards. Everything needed to be SPA, microservices, distributed databases, Kubernetes, etc. These components and layers needed to be glued together by trial and error.

I am really happy that common sense is starting to return, and more developers are starting to realize that integrated end-to-end frameworks are very useful for a lot of real-life application development scenarios.




> When SPAs became the norm and even static web pages needed to be built with React

I'm in a weird situation where I'm contracting into one organisation and they've contracted me out to another. The first organisation know me as a senior dev/architect with 15 years experience in a niche domain. The second organisation see me as brand new to them and despite paying an embarrassing day rate are giving me noddy UI tweaks to do. Extracting myself is proving to be slow somehow.

Anyway, they wanted a webapp with a couple of APIs and nothing on the page but a button, the authenticated username, and a line of text. Clicking the button opts you in/out of a service, and the text changes depending on the state. The sort of thing people go to once, maybe twice.

I used a mustache template on the server side to populate the values, and I didn't even bother with any JavaScript: just an old-school form submission to the API when the button was clicked, and a redirect back to the page.
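
A minimal sketch of that setup (the template and endpoint names here are made up for illustration, not taken from the actual project):

```html
<!-- optin.mustache: the server fills in all three values; no JavaScript needed -->
<p>Signed in as {{username}}. {{statusText}}</p>
<form method="POST" action="/api/opt-in/toggle">
  <button type="submit">{{buttonLabel}}</button>
</form>
```

The POST handler flips the opt-in flag and redirects back to the page (e.g. with a 303 See Other), so the re-rendered template shows the new state and a plain refresh can't resubmit the form.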

It was tiny, but obviously it was decided "we should be using a more modern framework" - code for React. It was the word "more" that got to me, as if there were some equivalent, dated framework I'd been using. I didn't put up a fight, partly because I was new to the team and figured they were hot on React and I wasn't. Somehow they made a complete hash of it; they couldn't even figure out how to get all their inline styles (the only styles they used) working without help.

I guess it's just those classics: people want to learn the hot new thing as they see it, their managers are happy that they've heard a buzzword they recognise, and then everything becomes a nail for their new hammer.


It's interesting to contrast this kind of organizational behavior with the type where "management won't give us time to deal with technical debt". Though arguably, using an over-complicated framework creates more technical debt; from a certain perspective, this is the other end of the same scale.

It seems to me what we want is some kind of "Platonic ideal" where the extremes are bad for development:

  Management won't give us time to deal with technical debt/incorporate best practices.
  |
  |
  THE IDEAL
  |
  |
  Management wants the Hot New Thing all the time. uservices are the state-of-the-art in best practices so uservices it is!
The best advice (IMO) for dealing with the top end of this spectrum is to frame technical debt and best practices in terms of whatever economic metric the manager cares about (e.g., "TDD will lead to fewer bugs, ergo happier customers, ergo greater retention"). But I wonder if the same framing can be used to encourage temperance in managers who dwell at the bottom of my spectrum.

I somehow suspect it won't work. The problem with those at the top is that they see the dev's proposition as an investment that will either not bear fruit or, worse, slow the team down; so they are incentivized to keep the status quo. Those at the bottom, however, start from the perspective that their idea will add value to the team, so they keep pushing for it no matter what. And telling them "Let's not do what Google does; we're not Google" is definitely seen as a devaluation of the team.

This has been a weird headspace to explore. I'd love to hear from others' experience on dealing with this.


IMHO the problem is that, reusing your scale, what I typically see is:

  Management that doesn't really understand tech and is afraid to try things
  |
  |
  THE IDEAL
  |
  |
  Management that doesn't really understand business (and sometimes tech too) and is focused on tech fashion
Surprise: the ideal is hard, because it requires both tech and business experience to recognize how best practices and new tech could bring value in a specific context. The job is to make clients happy by solving their problems, with apps that are nice to use, bug-free, performant, maintainable, and evolvable, with a usually short time to market and obviously at the best cost. Sometimes that equation is solved with a complex stack, architecture, and practices with hundreds of engineers, and sometimes with a few web pages, inlined CSS, a bit of vanilla JS, and a solo dev.


Every time that sort of thing has happened to me it's been because there's some grand plan to build out more features that the people on the frontline don't know about. The plan rarely materializes but the idea that the foundation should be built in a way that supports it isn't completely stupid.


It’s not stupid, no, but a “supporting foundation” is largely just a seductive metaphor. It says, “Clearly software is like a building. Every building needs a solid foundation.” It doesn’t inspire engagement with other metaphors, like considering software to be a tree that must be grown incrementally and as a product of dynamic forces. It doesn’t map knowledge from the building domain to knowledge in the software domain.


Or that, with software, you can always rip out the foundation and replace it. And you're working on it as you work on the rest of the "building" anyway.

The difficulty of working on lower abstraction layers doesn't scale with the number of higher layers. Unlike with buildings or bridges, there's no gravity in software, no loads and stresses that need to be collected and routed through foundations and into the ground, or balanced out at the core. In software, you can just redo the foundation, and usually it only affects the things immediately connected to it.

A set of analogies for software that are better than civil engineering:

- Assembling puzzles.

- Painting.

- Working on a car that's been cut in half through the middle along its symmetry plane.

- Working on buildings and bridges as a Matrix Lord who lives in the 4th dimension.

All these examples share a crucial characteristic also shared by software: your view into the artifact, and the construction work on it, happens in a dimension orthogonal to the one along which the artifact does its work. You can see and access (and modify, and replace) any part of it at any time.


The real "foundations" of a software system are probably its data structures rather than the infrastructure/backend. It's still an iffy metaphor though for the reasons you've given.


I love this insight, I just recently learned about the hidden HN feature to favorite comments and used it for the first time to favorite your comment. It's always a pleasure to read your comments on HN, I noticed your handle popping up here and there and would like to thank you for your contributions. If you had a collection of all your comments on HN printed in a book I think I would buy it:)


The biggest problem with templating libraries like mustache is that they aren't context-aware, so it's up to the programmer to remember the proper way to escape based on where a variable is used.
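
A sketch of what "context-aware" means here. The escape function below approximates mustache's default `{{var}}` behaviour; it's illustrative, not mustache's actual source:

```javascript
// Roughly what {{var}} does: HTML-entity escaping, nothing more.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
  }[c]));
}

// Fine in an HTML body or a quoted attribute:
escapeHtml("<script>alert(1)</script>");
// -> "&lt;script&gt;alert(1)&lt;/script&gt;"

// Useless in a URL context: a javascript: URL contains nothing to
// entity-escape, so <a href="{{link}}"> remains exploitable.
escapeHtml("javascript:alert(1)");
// -> "javascript:alert(1)" (unchanged)
```

The same value is safe or dangerous depending purely on where the template drops it, and mustache has no idea which context it's filling.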


Honestly, when they come up with decisions like that, I'd like them to spend some time on a formal writeup: have them prove they understand the problem, the existing solution, the issues with it, and why React or technology X would solve them. Have them explain why they / their employer should spend thousands on changing it.

I mean it'll only take them an hour or two to write it up, better to spend that than the 20 hours it would take you (for example) to spin up a new stack.


> developing became more and more inefficient

Anecdotally, I find the opposite to be true. I've been writing frontend code for over a decade, but I've never moved faster or written less buggy code than I do now. Is that because I've become a better developer? Sure, a little bit. But by and large, I don't believe that is ultimately the reason. I think it's the maturity of the technology. My growth as a programmer is hardly linear, and the past 5 years have not matched the growth I achieved in my first 5. Frontend tooling has never been better than it is today.

What I believe is that the bar to build web applications has been lowered, and there are more programmers today than ever before. You have people who are not experts in frontend development and JavaScript trying to build complex UIs and applications. So you take this person who doesn't have the requisite experience and put them to work in a paradigm with a lot of depth (frontend), using frameworks that are really simple and easy to get started with but compound problems as they are misused.

Another factor is that since SPAs are stateful, complexity mounts aggressively. Instead of a stateless page that resets with every refresh, a single page accumulates bugs that rear their heads for the duration of the session. These inexperienced people are put in charge of designing codebases that don't scale and become spaghetti. But when designed properly, these problems are largely negated.

I'm not advocating that SPAs are the solution to all problems. I think there's gross overuse of SPAs across the industry, but that is not an indictment of SPAs themselves. That is someone choosing the wrong technology to solve the active problem.

With respect to Angular (1; I never touched 2) specifically, I always found it extremely overengineered and poorly designed, with terrible APIs. But that's a problem with that specific framework and says nothing about SPAs at all.


> Frontend tooling has never been better than it is today.

What's the library or design pattern to consume a REST API in React or any of the mainstream front-end frameworks? The only thing I'm aware of is Ember Data but Ember is apparently not cool anymore, and I couldn't find a suitable replacement.

I'm asking because in all the projects I've been involved with, consuming the backend API always felt like a mess with lots of reinventing the wheel (poorly) and duplication of code. I can't believe in 2020 there's not some kind of library I can call that will give me my backend resources as JSON and transparently handle all the caching, pagination, error handling (translate error responses to exceptions), etc and people have to do all this by hand when calling something like Axios.

In contrast, Django REST Framework handles all that boilerplate for me and allows me to jump right into writing the business logic. It's insane that ~30 lines of code with DRF (https://www.django-rest-framework.org/#example) gives me a way to expose RESTful endpoints for a database model to the web with authentication, pagination, validation, filtering, etc in a reusable way (these are just Python classes after all) but the modern front-end doesn't have the client equivalent of this.
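
For what it's worth, the hand-rolled boilerplate being described tends to converge on something like the sketch below: a thin client that caches GETs and translates error responses into exceptions. All names here are illustrative; this is not any particular library's API:

```javascript
class ApiError extends Error {
  constructor(status, body) {
    super(`API request failed with status ${status}`);
    this.status = status;
    this.body = body;
  }
}

// fetchFn is injected so the client is easy to test and swap out.
function makeClient(fetchFn) {
  const cache = new Map();
  return {
    async get(url) {
      if (cache.has(url)) return cache.get(url); // naive cache, no invalidation
      const res = await fetchFn(url);
      if (!res.ok) throw new ApiError(res.status, await res.text());
      const data = await res.json();
      cache.set(url, data);
      return data;
    },
  };
}
```

Pagination, retries, and cache invalidation are exactly the parts this sketch leaves out, and they're where the real effort goes, which is presumably why every project ends up reinventing them slightly differently.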


> I'm asking because in all the projects I've been involved with, consuming the backend API always felt like a mess with lots of reinventing the wheel (poorly) and duplication of code. I can't believe in 2020 there's not some kind of library I can call that will give me my backend resources as JSON and transparently handle all the caching, pagination, error handling (translate error responses to exceptions), etc and people have to do all this by hand when calling something like Axios.

If you look at 20 REST APIs you'll probably see 30 different patterns for pagination, search/sort, error responses, etc. There have been a couple of attempts to standardize REST, such as OData, but I think it's safe to say they haven't been very successful. It's kind of challenging to build standard, reusable frontend tools when everyone builds backends differently.


Ember has somewhat solved that problem though.

You have the concept of data adapters which would be clients for your API (you can make a custom one if extending the existing ones isn't an option) and the rest of the application just interacts with the equivalent of database models without ever having to worry about fetching the data. You could swap the data adapter without having to change the rest of the code.

We seem to have lost this with the move to React though, and even the hodgepodge of libraries doesn't provide a comparable replacement.
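
The adapter idea can be sketched in a few lines (the names below are made up for illustration; Ember Data's real API is considerably richer):

```javascript
// The application only ever talks to the store; the adapter owns the
// transport. Swapping REST for GraphQL, gRPC, or test fixtures means
// writing a new adapter, not touching application code.
class RestAdapter {
  constructor(fetchFn, baseUrl) {
    this.fetchFn = fetchFn;
    this.baseUrl = baseUrl;
  }
  async findRecord(type, id) {
    const res = await this.fetchFn(`${this.baseUrl}/${type}/${id}`);
    return res.json();
  }
}

class Store {
  constructor(adapter) {
    this.adapter = adapter;
  }
  findRecord(type, id) {
    return this.adapter.findRecord(type, id);
  }
}
```

The key property is that the rest of the app depends only on the store's interface, so the data-fetching concerns live in exactly one place.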


Haven’t used it, but aren’t there things that can connect to a Swagger API spec and do some of the heavy lifting for you? I agree that the network layer in frontend is tedious to implement; things like GraphQL and Apollo attempt to raise the abstraction level. What I would really like to see is something even more abstracted, e.g. a wrapper around IndexedDB you can write to that syncs periodically over WebSockets to your server, more similar to the patterns we use on mobile.


It seems that you are describing pouchdb: https://pouchdb.com/


You're right, Pouch completely slipped my mind; it's a great solution. But what about something more generic on the backend that isn't database-specific, some sync engine you could put in front of whatever database you wanted? Can you do something like this with Pouch?


> What's the library or design pattern to consume a REST API in React or any of the mainstream front-end frameworks?

For React, it's out of scope; anything you can use in JS for this can be used. If you are using a state management library, that's probably more relevant to your selection here than React is.

REST is also too open-ended for a complete low-friction solution, but, e.g., if it's Swagger/OpenAPI, there are tools that will do almost the entire thing for you with little more than the spec.

> The only thing I'm aware of is Ember Data but Ember is apparently not cool anymore, and I couldn't find a suitable replacement.

Ember Data is definitely a valid choice. It may not be hyped right now, but that has little to do with utility or use in the real world.


The GraphQL frameworks - like Apollo - give you that. I haven't used it. For basic caching, the state management frameworks work pretty well, but it is a lot of layers when you add Redux or Vuex to your stack. It works well for us though, and I find it much easier to reason about than the old jQuery spaghetti-code style.


I hear you. I find myself needing to reinvent the wheel far too often to traverse the boundary between the client and server. I also feel that it shouldn’t be this hard. Apollo client and relay solve this problem for GraphQL APIs (quite nicely IMO). What’s missing is an Apollo client for non-GraphQL APIs.


For what it's worth, gRPC-Web is a pretty nice solution here.

My team generates backend stubs from our gRPC spec, which allows us to jump right to implementing our business logic.

Frontend projects make use of the gRPC-Web client codegen to make calling the API simple and type-safe (we use TypeScript).

We mostly use the official gRPC tooling for this. We write backends in Go and .NET Core, so gRPC-Web is supported quite well out of the box.

I wrote a slightly modified TypeScript code generator to make the client code simpler as well: https://github.com/Place1/protoc-gen-grpc-ts-web


Yeah, after experiencing type safe APIs + editor integration with TypeScript I don’t think I can go back.

There are, of course, other solutions besides GRPC.


React Query.


React-query.


> These inexperienced people are put in charge of designing codebases that don't scale and become spaghetti.

I think this is one area where frontend tooling can be painful for the average dev. The bar to writing idiomatic JS for a given framework can get pretty high quickly, especially when you look at some of the really popular tools out there (e.g., Redux).

Frontend work has become so much harder to grok because the patterns around things like state management still have a lot of warts. The terminology of Redux drives me crazy because it's really difficult to explain things like reducers.


What most people have in mind as "idiomatic JS" isn't that. It's usually meant to refer to some patterns that appeared and started getting popular around 8 years ago. And often, code written in this not-idiomatic way works _against_ the language and/or the underpinnings of the Web in general. It's just that the circles promoting the pseudo-idioms have outsized and seemingly inescapable influence.


This is very vague. Can you give some examples?


The question asking for clarification is itself vague. Examples of which part?

Look at JS that's written for serious applications today, identify the stuff that you'd label as "idiomatic", and then look at code that was written 10 years ago for serious applications, and see if it matches what your conception of "idiomatic JS" is. Good references for the way JS was written for high-quality applications without the negative influence of the new idioms (because they didn't exist yet): the JS implementing Firefox and the JS implementing the Safari Web Inspector.

Examples of how "idiomatic JS" is often written by people who are working against the language instead of with it:

- insistence on overusing triple equals despite the problems that come with it

- similarly, the lengths people go to to treat null and undefined as if they're synonymous

- config parameter hacks and hacks to approximate multiple return values

- `require`, NodeJS modules, and every bundler (a la webpack) written, ever

- `let self = this` and all the effort people go through not to understand `this` in general (and on that note, not strictly pure JS, but notice how often the `self` hack is used for DOM event handlers because people refuse to understand the DOM EventListener interface)

- every time people end up with bloated GC graphs with thousands of unique objects, because they're creating bespoke methods tightly coupled via closure to the objects that they're meant for because lol what are prototypes
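
That last point can be illustrated in a few lines (a toy example, not taken from any real codebase):

```javascript
// Closure style: every instance carries its own copy of every method,
// each tightly coupled to its instance via closure.
function PointClosure(x, y) {
  this.norm = function () { return Math.hypot(x, y); };
}

// Prototype style: one function object, shared by all instances.
function PointProto(x, y) {
  this.x = x;
  this.y = y;
}
PointProto.prototype.norm = function () { return Math.hypot(this.x, this.y); };

const a = new PointClosure(3, 4);
const b = new PointClosure(3, 4);
// a.norm !== b.norm: two distinct function objects for the GC to track.

const c = new PointProto(3, 4);
const d = new PointProto(3, 4);
// c.norm === d.norm: a single shared method on the prototype.
```

Multiply the closure version by thousands of objects and a handful of methods each, and you get the bloated GC graphs described above.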

These "idioms" essentially all follow the same "maturation" period: 1. A problem is encountered by someone who doesn't have a solid foundation 2. They cross-check notes with other people in the same boat, and the problem is deemed to have occurred because of a problem in the language 3. A pattern is adopted that "solves" this "problem" 4. Now you have N problems

People think of this stuff as "idiomatic JS" because pretty much any package that ends up on NPM is written this way, since they're all created by people who were at the time trying to write code like someone else who was trying to write code like the NodeJS influencers who are considered heroes within that particular cultural bubble, so it ends up being monkey-see-monkey-do almost all the way down.


Hi, I'm also a new JS coder, but I'd like to avoid becoming one of "those people" you're talking about. I've been struggling with exactly what you mention - how to find out the "correct" way to apply patterns/do relatively complex things, but all I get on search results are Medium articles written by bootcamp grads.

Can you recommend any sources of truth/books that can guide down the right path? Of course I'll be going through all the things you mention but I'm just curious if there's somewhere I can get the right information besides just reading through Firefox code, for example.

Thanks!!!


I'd say Eloquent JavaScript (available for free online, I think) is a good book to read. "You Don't Know JS" is also a good one!

Basically, go for anything that teaches you non-JS-specific approaches as well as a solid understanding of the fundamentals.


Thank you! I'll check both of those out


And Crockford’s JavaScript: The Good Parts. Although it’s an older book, the JavaScript fundamentals never change, and it describes a lot of those forgotten foundations.


A good starting point to explaining reducers is that you are reducing two things into one.

A redux reducer: the action and the current state reduce into the new state. And of course it doesn't matter how many reducers or combined reducers your state uses - they're all ultimately just doing this.

This also works for Array.prototype.reduce(). You're reducing two things into one.
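
A tiny example of that framing (a hypothetical counter, not taken from Redux's docs):

```javascript
// A redux-style reducer: (state, action) -> new state.
function counter(state, action) {
  switch (action.type) {
    case "INCREMENT": return state + 1;
    case "DECREMENT": return state - 1;
    default: return state;
  }
}

// Replaying a list of actions through it literally *is* a reduce:
const actions = [
  { type: "INCREMENT" },
  { type: "INCREMENT" },
  { type: "DECREMENT" },
];
const finalState = actions.reduce(counter, 0);
// -> 1
```

The reducer signature is exactly the callback shape `Array.prototype.reduce()` expects, which is where the name comes from.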


The concept of a reducer isn’t the hard part of Redux... it’s designing your state, organizing actions/reducers/selectors, reducing (no pun intended...) boilerplate, dealing with side effects and asynchrony, etc.


Redux is not idiomatic JavaScript, though. It's trying to make JavaScript immutable and have ADTs, which it doesn't. If you use Elm, ReasonML, ReScript, etc., this pattern is a lot easier to implement than in JavaScript.


I wasn't readily familiar with the acronym ADT. It's "Algebraic Data Types" for anyone else in the same boat.

https://en.wikipedia.org/wiki/Algebraic_data_type


In CS it also refers to Abstract Data Types


Ah, good call. Thank you! Here's a reference I found that explains the differences between those two concepts, as applied to Scala:

https://stackoverflow.com/questions/42833270/what-is-the-dif...


Yup, it’s beautiful in Elm and makes no sense in JavaScript, which doesn’t need it.


I definitely agree with you: FE tooling has gotten a lot more mature, and we have a lot further to go as well.

I'm primarily a backend developer, and I think backend developers generally make for "poor frontend devs". I'm talking about those "occasional" times the backend dev needs to do some FE work. They just don't know the tech and best practices as well, and don't spend as much time with it as a dedicated FE dev. jQuery code written by the "occasional frontend dev" is kinda horrific in many cases.

Now please, internet, hear me: I'm not saying you can't write bad code in a JS framework. I'm saying it's usually less often and less bad - especially for non-dedicated FE devs.

Like crossing a street: just looking left and right won't guarantee a safe crossing, but it makes an accident a damn sight less probable.

If you are a shop with mostly backend devs and don't want to invest in an FE dev, you definitely should look into a JS framework.

Svelte is always a good start: very small and bare-bones.


> If you are a shop with mostly backend devs and don't want to invest in an FE dev, you definitely should look into a JS framework.

That matches my experience.

I worked in a shop with only backend developers and the frontend was an absolute buggy mess of jQuery on top of bootstrap. After migrating most of it to Vue I taught it to the team and all the experienced-but-frontend-shy developers started producing great frontend code by themselves.


> Svelte is always a good start: very small and bare-bones

I second this

The barrier to entry is lower than for React, and the results are great


> Frontend tooling has never been better than it is today.

Eh. Swing in its golden age ran circles around what we have now. Granted, it's old tech now that we've settled on in-browser delivery, but still:

- look-and-feels could do theming you can only dream of with CSS variables/SCSS

- serialize and restore GUI states, partially or whole, including listeners

- value-binding systems Vue can only dream of

- native scrollers everywhere that you could style and listen to, without the renderer complaining about passive handlers

- layout that didn't throw a fit about forced reflows

- unhappy with the standard layouts? build your own as needed

- debug everything, including the actual render call that put an image where it is

- works the same on every supported OS

Browsers are an insufferable environment to work within compared to that. CSS is powerful and all, but you get a system you can only work with by inference, and where everything interferes with everything else by default; that works great for styling a document and is a tragedy in an app with many reusable nested components.


Not to be mean, but I worked with Swing for ten years and it was absolute crap. Constantly dealing with resizing “collapsars”, GBLs, poor visual compatibility with the host OS, a fragile threading model and piles and piles of unfixed bugs and glitches was a nightmare. It might have worked if you had a specific end user environment but it was a PITA for anything else, and deployment was even harder.

There are a few things I definitely miss from my 20 years as a Java dev, but the half-assed and underfunded Swing UI is not among them.

Give me HTML+CSS+JS any time.


Well, we're talking about the tooling, but point taken: Swing wasn't perfect (the Totally GridBag 2007 short: https://www.youtube.com/watch?v=UuLaxbFKAcc )

But it's not like it was that much worse than flexbox bugs ( https://codepen.io/sandrosc/pen/YWQAQO still renders differently in Chrome than in Firefox; which one is wrong is immaterial).


> Give me HTML+CSS+JS any time.

Sure! And Chrome is superbly tested.

But HTML/CSS/JS aren't anywhere near good enough to build GUIs of any complexity by themselves, so everyone layers tons of stuff on top. And then people have plenty of complaints about those layers, too. But if they didn't use them, those complaints would migrate to the underlying framework.

I mean, Swing may have had a fragile threading model (not sure what you mean by that really), but HTML doesn't have one at all. Not great!


I agree with you 100%, and was really just addressing the “swing is awesome” statement in the GP. I’ll take HTML etc over Swing any day, but I’m sure there are nicer alternatives if your deployment environment is native, e.g. SwiftUI (which I have no experience with)

There are plenty of things wrong with HTML & friends, await/async and webpack being my personal hair removers, but if we set that aside and just talk about the DOM as the API for the UI, it’s very robust, well documented and widely available. I don’t love it, but it works.


"swing tooling" thank you, don't misrepresent my argument.


HTML was never meant for building apps. We took a square peg (document markup language) and jammed it into a round hole (app development.) Most of the problems and frustrations with web development go back to this.

We've been using the wrong tool for the job for over 2 decades. Now it's everywhere and nobody knows any better. It's probably too late now.


> What I believe is that the bar to build web applications has been lowered,

Yes, the bar to build web applications has been lowered. We can all build something on the level of GMail now.

The ability to build websites has been crippled, because you are often forced to build sites using tools suited to applications. As you and the parent comment both seem to agree.


Yeah, I agree there is some overuse of SPAs, but hasn't anyone in this thread worked in older Java monoliths with JSP, or even the good old Struts framework? THEN you can see what inefficient development looks like.


> Everything needed to be SPA, microservices, distributed databases, Kubernetes, etc. These components and layers needed to be glued together by trial and error.

This is a major problem with our industry. Unfortunately, the people with the power to curb this trend have their paycheck depend on it continuing.

As a company, you are incentivised to have a large tech team to appear credible and raise funding, so you hire a CTO and maybe some engineering managers. Their careers in turn benefit from managing large numbers of people and solving complex technical problems (even if self-inflicted), so they’ll hire 10x the number of engineers the task at hand truly requires, organise them into separate teams, and build an engineering playground that guarantees their employment. It also gives them talking points (for conferences, the seemingly mandatory engineering blog, or interviews for their next role) about how they solve complex problems, which are self-inflicted as a side effect of an extremely complex stack with lots of moving parts. Developers themselves need to constantly keep up to date, so they won’t usually push back on having to use the latest frontend framework, and even if they do, that decision is out of their hands and they’ll just get replaced or not hired to begin with.

In the end, AWS and the cloud providers are laughing all the way to the bank to collect their (already generous) profits, now even more inflated by having their clients use 10x the amount of compute power that the business problem would normally require.

Maybe the issue is the seemingly infinite amounts of money being invested into tech companies of dubious value, and the solution would be to come back to Earth, so that some financial pressure from the top incentivises using the simplest solution to the problem at hand?


This is the main reason I refuse to entertain going perm in the tech sector. The amount of superfluous infrastructure and unquestioned use of SPAs is just an overwhelming time sink. I would honestly rather work with some two-bit company's legacy PHP than this mountain of crap.


For what it's worth, once you are proficient in the full end-to-end, navigating it is pretty easy, IMHO.

It just takes years and lots of room to do basically nothing, and if something meaningfully shifts, you need a while to get back up to speed.

I'm not saying it's efficient, or that you should dive in, but I did want to throw out there that there is a light at the end of the tunnel. People using React.js aren't flailing about in the dark the whole time.


> It just takes years and lots of room to do basically nothing, and if something meaningfully shifts, you need a while to get back up to speed.

If true, that's a damning indictment of the industry and the whole SPA pattern.


Food for thought: the tech sector is much larger than the trendy dumpster fire of web development. You don't have to work at some startup on some website. There is still lots of real programming to be done.


What are the real growth areas? I'd welcome an exit from web development but as a freelancer it seems to be all there is.


Kubernetes is the biggest joke. I remember working with a sysadmin who worked for The Guardian, provisioning servers remotely as demand spiked. This was pre-AWS. He used Puppet, and remarked that you would only ever need what he was using if you were managing massive fleets of servers. Then Kubernetes and Docker arrived, intended for even bigger deployments in data centres. Before you knew it, just as with SPAs, Kubernetes and Docker became the new requirements for web devs working on simple apps.


Also never underestimate the power of a single bare-metal server. Today everyone seems to be in the clouds (pun intended) and has seemingly accepted the performance of terrible, underprovisioned VMs as the new normal.


Stack Overflow -- the website that practically every developer uses all the time -- is an example of a site running efficiently on a very small number of machines.

I'd rather have their architecture than hundreds of VMs.


It’s remarkably efficient and simple: https://stackexchange.com/performance

For those who are discouraged by the massive complexity of Kubernetes/Terraform and the various daunting system-design examples of big sites: remember that you can scale to ridiculous levels (barring video or heavy-processing apps) with just vertical scaling.

Before you need fancy Instagram scale frameworks, you’ll have other things to worry about like appearing in front of congress for a testimony :-)


This is indeed the standard example I cite to prove my point, and all my personal projects follow this model whenever possible. The huge advantage, in addition to performance, is that the entire stack is simple enough to fit in your mind, unlike Kubernetes with its endless moving parts and failure modes.


Wow. Stack Exchange is a curious case study.

I share the general HN sentiment over microservices complexity but just to play devil's advocate...

I suspect that server cost in this case is asymptotic. If the (monetary) cost of SE's architecture is F(n) and your typical K8s cluster is G(n), where n is number of users or requests per second, F(n) < G(n) only for very large values of n. As in very large.

In essence, the devil's advocate point I'm making is that maybe development converges towards microservices because cloud providers make this option cheaper than traditional servers. We would gladly stay with our monoliths otherwise.

I tried to contrive a usage scenario to illustrate this but you know the problem with hypotheticals. And without even a concrete problem domain to theorize on, I can't even ballpark estimate compute requirements. Would love to see someone else's analysis, if anyone can come up with one.
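For what it's worth, here is one contrived scenario, with every number invented purely for illustration: a monolith on owned hardware has a high fixed cost (servers, ops staff) but a low marginal cost, while a managed cloud/K8s setup starts cheap but charges more per unit of traffic.

```python
# Invented cost curves, purely to illustrate the F(n) < G(n) argument.
def monolith_cost(n):          # F(n), yearly USD: big fixed, tiny marginal
    return 500_000 + 0.001 * n

def managed_cloud_cost(n):     # G(n), yearly USD: small fixed, higher marginal
    return 20_000 + 0.005 * n

# Crossover where F(n) == G(n):
# 500_000 + 0.001n = 20_000 + 0.005n  ->  n = 480_000 / 0.004 = 120M requests
crossover = (500_000 - 20_000) / (0.005 - 0.001)
print(f"monolith wins beyond {crossover:,.0f} requests/year")
```

With these made-up slopes the monolith only pays off past 120M requests/year, which is exactly the "F(n) < G(n) only for very large n" shape; real provider pricing would move the crossover but not the structure of the argument.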


Microservices will add latency because network calls are much slower than in-process calls.
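That latency claim is easy to demonstrate: even over localhost loopback (a best case -- a real cross-host RPC adds far more), an HTTP round-trip dwarfs an in-process call. A rough sketch:

```python
# Compare an in-process function call with the same "call" made over HTTP.
import threading
import timeit
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def add(a, b):                 # the in-process version of the call
    return a + b

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):          # the "microservice" version: same result, over HTTP
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"3")
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

t_local = timeit.timeit(lambda: add(1, 2), number=200)
t_http = timeit.timeit(lambda: urllib.request.urlopen(url).read(), number=200)
server.shutdown()
print(f"in-process: {t_local:.6f}s  http: {t_http:.6f}s")
```

The ratio varies by machine, but the network version is reliably orders of magnitude slower, before any serialization, auth or service discovery is added on top.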

Microservices, as an architectural choice, are most properly chosen to manage complexity - product and organizational - almost by brute force, since you really have to work to violate abstraction boundaries when you only have some kind of RPC to work with. To the degree that they can improve performance, it's by removing confounding factors; one service won't slow down another by competing for limited CPU or database bandwidth if they've got their own stack. If you're paying attention, you'll notice that this is going to cost more, not less, because you're allocating excess capacity to prevent noisy neighbour effects.

Breaking up a monolith into parts which can scale independently can be done in a way that doesn't require a microservice architecture. For example, use some kind of sharding for the data layer (I'm a fan of Vitess), and two scaling groups, one for processing external API requests (your web server layer), and another for asynchronous background job processing (whether it's a job queue or workers pulling from a message queue or possibly both, depends on the type of app), with dynamic allocation of compute when load increases - this is something where k8s autoscale possibly combined with cluster autoscaling shines. This kind of split doesn't do much for product complexity, or giving different teams the ability to release parts of the product on their own schedule, use heterogeneous technology or have the flexibility to choose their own tech stack for their corner of the big picture, etc.


Not to mention, you need an infra team to manage all this complexity - much larger team than maintaining a few vertically scaled servers.

Salaries for 3 infra engineers run about $300k per year; cost to company is probably $450k.

For $450k a year, you can get about 500 servers, each one with 128 GB RAM and 32 vCPUs.

Has anyone done this type of ROI analysis?
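Running the parent's own figures (all assumptions are theirs; whether a 128 GB / 32 vCPU machine actually rents for the implied price is a separate question):

```python
# Back-of-envelope check of the parent's numbers (all figures assumed).
salaries = 300_000            # 3 infra engineers, combined salary per year
cost_to_company = 450_000     # with benefits/overhead
servers = 500
per_server_month = cost_to_company / servers / 12
print(f"${per_server_month:.0f}/month per server")
```

That works out to a $75/month budget per server, which is roughly dedicated-server territory rather than typical cloud VM pricing, so the comparison at least isn't absurd on its face.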


I'm not sure if we're on the same page here. When I said "cloud providers make this option cheaper than traditional servers" I meant it as in the pricing structure/plans of cloud providers. That's why I tried to contrive a scenario to make a better point. Meanwhile your definition of cost seems to center on performance and org overheads a team might incur.

You say that serverless will cost more "to prevent noisy neighbor effects"... but that is an abstraction most cloud providers will already give you, something you already pay your provider for. So my DA point now is: is it cheaper to pay them to handle this, or to run your own servers and manage them manually?


> You say that serverless will cost more "to prevent noisy neighbor effects"...but that is an abstraction most cloud providers will already give you

I actually wasn't talking about serverless at any point - I understand that term to mostly mean FaaS and don't map it to things like k8s without extra stuff on top, which is closer to where I'd position microservices - a service is a combo of data + compute, not a stateless serverless function. But I agree we're not quite talking about the same things. And unfortunately I don't care enough to figure out how to line it up. :)

My main point, I think, was that org factors rather than cloud compute costs are why you go microservice rather than monolith.


I can't recall reading much on how going for 'the cloud' or 'serverless' saved anyone money. On the other hand, I've read my fair share of horror stories about how costs ballooned and going for the old-fashioned server/VPS ended up being much, much cheaper.

The main argument in favor of the 'cloud' is that it's easier to manage (and even that is often questioned).



I haven't looked for a while but Plenty Of Fish (POF) also ran on the same infrastructure and the same framework - ASP.Net. Maybe ASP.Net is particularly suited to this approach?


What about interpreted languages? I was taught that a Python web server can handle $NUMCPUS+1 concurrent requests, and that therefore 32 one-CPU VMs will perform as well as one 32-CPU VM.


You still have the overhead of the OS. In the first case you're running 32 instances, each with its own OS; in the latter you have a single OS to run.

Unless high availability is the concern, I'd always recommend a big machine with lots of CPUs over lots of small ones.
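For what it's worth, the parent's rule of thumb is close to the formula suggested in the Gunicorn docs, workers = (2 x cores) + 1; either way, the per-VM fixed overhead is what makes the 32 small VMs lose:

```python
# The parent's rule of thumb: a sync Python server handles roughly
# cores + 1 concurrent requests; Gunicorn's docs suggest (2 * cores) + 1.
def workers(cores, factor=1):
    return factor * cores + 1

one_big_box = workers(32)         # 33 workers, one OS, one box to patch
many_small_vms = 32 * workers(1)  # 64 workers on paper, but 32 OSes,
                                  # each eating into its VM's single CPU
print(one_big_box, many_small_vms)
```

The small VMs appear to win on worker count only because the "+1" multiplies 32 times over; in practice each one-CPU VM also spends its single CPU on its own kernel, agents and runtime, which is exactly the OS overhead the parent describes.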


Kubernetes is overkill for most applications, that's true, but Docker is awesome because it solves almost all of the "it works on my machine, doesn't work in prod" and "it worked yesterday, doesn't work today" issues, and isn't that hard to adopt.


Kubernetes has been great for us and is much easier to manage over time than servers. There's an adoption cliff, but I'd take kube over spinning up servers with Puppet any day.

Hell I might even run kube if I was running bare metal. Declarative workloads are amazing.


> I've always felt this problem from the first time I touched Angular. It was just so much more complex and fragile without actually a lot of benefit unless you wanted to make a really interactive async application like Google Docs or Facebook Chat.

It sounds crazy to say now, but Angular became big because it was actually quite lightweight compared to the other JS frameworks of that era; declarative two-way data binding was cool, it was compatible with jQuery (and thus its widget ecosystem), and it was developed with testing in mind. So it was easy to move jQuery projects to Angular, developers cared about this aspect, and it helped organize code quite a bit. Angular 2, on the other hand, never made sense; it was a solution looking for a problem.

React and JSX came along and let developers use JS classes when a lot of browsers didn't support them. And unidirectional data flow was all the rage, presented as always being the right solution, of course; but I had never heard of DOM diffing before React, and to me that is its main appeal. To this day, the HTML APIs still don't offer a native (and thus efficient) DOM diffing API, which is a shame.
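For readers who haven't seen it, the DOM-diffing idea can be sketched as a toy tree diff (plain dicts standing in for virtual DOM nodes; nothing like React's actual reconciler):

```python
# Toy tree diff: compare old and new "virtual DOM" trees (plain dicts)
# and emit the minimal list of mutations instead of rebuilding everything.
def diff(old, new, path="0"):
    if old["tag"] != new["tag"]:
        return [("replace", path, new)]          # different element: swap subtree
    ops = []
    if old.get("text") != new.get("text"):
        ops.append(("set_text", path, new.get("text")))
    old_kids = old.get("children", [])
    new_kids = new.get("children", [])
    for i in range(max(len(old_kids), len(new_kids))):
        child_path = f"{path}.{i}"
        if i >= len(old_kids):
            ops.append(("append", path, new_kids[i]))
        elif i >= len(new_kids):
            ops.append(("remove", child_path, None))
        else:
            ops.extend(diff(old_kids[i], new_kids[i], child_path))
    return ops

old = {"tag": "div", "children": [{"tag": "span", "text": "opted out"}]}
new = {"tag": "div", "children": [{"tag": "span", "text": "opted in"}]}
print(diff(old, new))   # -> [('set_text', '0.0', 'opted in')]
```

The point is that only the changed text node is touched; the rest of the tree generates no mutations at all, which is why re-rendering "everything" in React can still be cheap.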

> When SPA's became the norm and even static web pages needed to be built with React, developing became more and more inefficient. I saw whole teams struggling to build simple applications and wasting months of time, while these used to be developed within a couple of weeks with just 1 or 2 developers, on proven server side opinionated frameworks. But that was no longer according to best practices and industry standards. Everything needed to be SPA, micro services, distributed databases, Kubernetes etc. These components and layers needed to be glued together by trial and error.

IMHO the problem isn't React and co, or even SPAs. In fact, writing a REST/Web API should be easier than writing a server-side-generated HTML website (no need for a templating language, ugly form frameworks, ...). The problem is the horrible, complex NodeJS/NPM-backed asset-compilation pipelines and build tools that these frameworks often require in a professional setting, which incur a lot of complexity for very little gain.


In fact, writing a REST/Web API should be easier than writing a server-side-generated HTML website (no need for a templating language, ugly form frameworks, ...)

Why is that easier? It's more work: you are now rendering two views instead of one, a JSON one (on the server) and an HTML one (in the client), with all the JSON encoding/decoding that entails. You are still using a templating language, and dealing with forms in React is more cumbersome than doing it server-side.


To be fair, they only said writing an API should be easier than writing a server-rendered HTML form (1:1).


Separation of concerns, easier testability, easier mocking... The thing is, especially in more complex applications, the code that generates/validates the data and the code that displays it are usually written by two different people.

Nowadays, once the JSON schema design is settled, they can work in parallel; each of them can test their part without needing the other, and the merges can be simpler, because the parts work more or less stand-alone.
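A minimal sketch of that workflow, with a hand-rolled contract check and invented field names (in practice you'd settle on something like JSON Schema):

```python
# Agreed contract for a hypothetical /api/opt-in response, settled up front
# so backend and frontend work can proceed in parallel against it.
CONTRACT = {"username": str, "opted_in": bool}

def validate(payload, contract=CONTRACT):
    """Check that payload has every contracted field with the right type."""
    missing = [k for k in contract if k not in payload]
    wrong = [k for k, t in contract.items()
             if k in payload and not isinstance(payload[k], t)]
    return not missing and not wrong

# Backend side: does the handler's output honour the contract?
assert validate({"username": "alice", "opted_in": True})
# Frontend side: mocked responses only need to honour the same contract.
assert not validate({"username": "alice"})                     # missing field
assert not validate({"username": "alice", "opted_in": "yes"})  # wrong type
print("contract checks pass")
```

Both sides test against the shared contract rather than against each other's code, which is what makes the parallel development and the simpler merges possible.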


> React and JSX came along and allowed developers to use JS classes

Nitpick, but I doubt that was the reason developers were flocking to React back then. In the beginning, browsers didn't support JavaScript classes and neither did React; you faked them using a function called React.createClass instead. There was also no transpilation required, as JSX was optional. In fact, React was always about unidirectional data flow, and reasoning about state -> DOM elements rather than reasoning about changes to the DOM.


> The problem is the horrible and complex NodeJS/NPM backed asset compilation pipelines and build tools

Would like to hear more.


I don't think it's developers tbh. Or rather, it's another set of perverse incentives in the industry.

To get a job, devs need experience in relevant tech. No company is willing to train their devs - they all have to hit the ground running. So devs have to have demonstrable experience in the tech that lots of companies use. Companies need to hire devs and don't really care what tech is used, but using what everyone else uses makes their hiring easier, because it's easier to find devs who want to work on that tech. So they advertise for devs with experience in the hot tech of the day. The devs see this and try to move their internal projects onto the hot tech, so that if/when they look for their next job they'll have experience in it.

The devs are just trying to stay relevant in a rapidly changing tech scene so they can get their next job.

The companies who employ them don't care what tech is used, but find recruiting devs to be easier if they're working in the latest hot tech.

The key point that could change all this is if companies were willing to train their devs in the tech stack they're using.


Everyone looks to the bigger companies for tech trends, not realising that they have none of the problems those ultra-scale companies are trying to solve.


I also wonder how much of the SPA trend by mega corps was about shifting compute “client side” to save money on infrastructure. It’s kinda like modern data warehouses where storage/compute is now so cheap you do ELT and not ETL anymore. I probably wouldn’t do an SPA today unless I really had to.


The problem is that nobody knows whether something will become the next Google Docs. Transitioning to an SPA from something like jQuery is basically a complete rewrite.

To be willing to not use an SPA, you need to be willing to exclude certain options from day 1. Find me a product manager willing to do that.


> nobody knows whether something will become the next Google Docs

How many times has it actually happened that some scrappy startup 1) became the next big thing and 2) was killed, or had its revenue significantly impacted, by not being at the edge of over-engineering? This just feels like wishful thinking.

Also keep in mind that even if you were on track to become the next Google Docs, this means your current product is usually good enough as-is and gives you time (and $$$) to improve it.


I'm not sure that using React or another JS framework counts as 'being at the edge of over-engineering'.

I agree with the rest of your point - the value of the product to end-users has little to no correlation with the underlying technology choices, which is a pretty controversial statement, but one that I think is true. A customer doesn't care if you built it in React, in one Perl file, or if you're sacrificing goats to retain the minimum requisite levels of dark magic to keep the system running. If it solves their problem they'll keep giving you money for goats.


It depends on what the objective is. I've seen plenty of projects where React was used just to have it as a buzzword, but otherwise provided no functionality; it actually slowed development down and ended up being less reliable (we had to - poorly - reimplement behaviors like validation, pagination, etc. that our backend framework already had for free).


Another view of this problem: evolving an SPA CRUD app into Google Docs may also be a complete rewrite.

IMO, when the time comes and your product is doing well, you may be ready financially and technically to do a complete rewrite. Otherwise, keeping the current functioning application is probably better if the rewrite isn't justified.


React isn't so bad. It's fairly straightforward and the components are contained within the page. And it's more of a library than a framework. The core is small and easy to learn

Angular is a giant confusing pile of magic. It's so complex you've gotta be a core developer to even understand how an app comes together. Stay the hell away if you can



