Unpopular opinion: as with most frameworks, things are moving far too fast to be beneficial. I've been in the development community since the mid-to-late 80s, and I've been a web developer since the early 2000s, yet I still cannot keep up with the latest and greatest in the technologies I work with, which change from one job to the next, and in many cases from one project to the next.
Angular lost me going from 1 to 2, for example. Not because Angular 2 was bad, but because they decided they needed a fresh start. Vue 2 -> Vue 3 is almost the same story, though I stuck with that one. Rails? Oh boy... PHP? That is a damn nightmare. Tell me, without looking at the docs, what the best practices are for even the most basic stuff.
If I have to rewrite large portions of my code for a new revision, even a major one, you are most certainly doing it wrong. If you want to rewrite the project, at least take care to add a compatibility layer (which some projects have done, but most have not).
The really frustrating part to me is that every year the frontend community claims it has finally slowed down and that things are stable.
I just ran "create-next-app", and while I can't claim that none of it is eventually needed in some form or other, it is mind-blowing that I think I could hello-world a video game with less crap than this.
That point is crazy to me. Yes, if you are making a multi-platform game, you will need a ton of boilerplate, and it will far outstrip the boilerplate this React scaffold gave me. The same goes for a large team building a web-based application.
However, for dipping your toes into the frontend part of a web page, it boggles me that they push so much on you that is not directly a frontend concern. Worse, there is a ton here meant to help with "fast" web pages. The problem is, most of the slow pages out there are slow because the teams just can't keep up with all of these shifts in how you are supposed to do things. Every page that is actually fast simply stuck with how things were done years ago.
>Problem is, most of the slow pages out there are slow because the teams just can't keep up with all of these shifts
Not really. Most pages are not slow because they're using outdated techniques, or because they're still serving everything from a large Windows Server running an ancient version of .NET. Those old React pages aren't slow because they "didn't keep up" either; they still run fine.
Pages are slow because of the obscene amounts of tracking and "user conversion" techniques. Banners, notices, trackers to make note of everything you do on the page, trackers to make note of errors, third-party trackers, ads everywhere because a simple static one isn't good enough; the list goes on.
Guess what those old pages that "stayed fast" don't have? You guessed it: all those BS trackers. They weren't readily available back then.
I didn't mean that old React pages are slow. I meant that sites which started on old React and kept trying to keep up with all of the newest best practices over the years are slow. Typically because they do too much in the webapp and don't treat it as a GUI only. The last web app I looked at in my old job was literally as much code as the entire backing service for it. I don't know how to justify that.
And yeah, for public sites, tracking almost certainly dominates how slow things are. I was thinking of internal tooling sites. I remember how fast many of the tools were when I started at the last job. I also remember how slow all of them were getting as I was leaving said job. It is more than a touch insane.
I can't say I've come across much internal tooling where the experience is anywhere near as bad as public sites. I find that even the cheap hardware offerings from businesses tend to handle bloated internal tooling pages fine, even if the first load might take a good minute. Ignoring tooling that was specifically crafted for IE6 and hasn't been touched since. May that rot in hell.
There was this really horrific one I did experience: come to find out, the entire company was relying on this Angular app that connects to an ancient Mac mini that IT is responsible for but mostly forgot about. It wasn't a large company, but they weren't stalling in growth!
Certainly many sites with tons of tracking are worse than most internal sites. You will not get a disagreement from me on that.
I can't really cite examples, as I'm no longer at the old job. But silly things like our "news" homepage had gotten to the point that it lazy-loaded all of the news. HR pages would load in parts, too. And issue tracking seemed to be a constant mess.
The tool my team controlled had bloated to an excessive point. Some of that was almost certainly my fault; I had designed a somewhat granular backend. That said, I remember things were certainly faster before we had a ton of Redux-based code doing things.
Sorta? The backend specifically exists to be where state is persisted. And the validation of the data has to happen at the backend, even if you also repeat it at the front end.
You are correct that there can be more interactions on the frontend, but frontends were both faster and easier to deal with when they did not do all of this. For many internal tools, bouncing validation back from the backend is also easier to understand than duplicating the validation in the website itself.
I suspect it is a bit of a curve. Zero interaction on the frontend is not pleasant. All of the state replicated on the frontend is also not useful and likely to be more problematic.
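As a hedged sketch of what "bouncing validation back from the backend" can look like (every name here is hypothetical, purely for illustration): the backend owns the rules and returns per-field errors, and a thin frontend only renders whatever comes back, instead of re-implementing the rules client-side.

```typescript
// Hypothetical sketch: server-side validation whose errors the frontend
// simply displays. The backend stays the single source of truth.
type FieldErrors = Record<string, string>;

interface SignupForm {
  email: string;
  age: number;
}

// Runs on the server; returns an empty object when the form is valid.
function validateSignup(form: SignupForm): FieldErrors {
  const errors: FieldErrors = {};
  if (!form.email.includes("@")) {
    errors.email = "Enter a valid email address.";
  }
  if (!Number.isInteger(form.age) || form.age < 18) {
    errors.age = "You must be at least 18.";
  }
  return errors;
}

// A thin frontend just renders the server's response, e.g.:
//   const errors = await fetch("/signup", { method: "POST", ... })
//     .then((r) => r.json());
//   Object.entries(errors).forEach(([field, msg]) => showError(field, msg));
```

The trade-off is an extra round trip per submit, which for internal tools is usually cheaper than keeping two copies of the rules in sync.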
That's how server-side interaction was in the early 00s with ASP.NET WebForms. This type of interaction is one of the things now being proposed as an alternative to client-side rendering. And yeah, it definitely was a (very painful) thing. Server-driven Blazor has very similar (poor) behavior today. I know there are other frameworks that do similar things, which really sucks if you have any latency or bandwidth issues.
Not that I like the idea of several MBs of JS being sent over the wire either... I know that is also a thing, not to mention poor state management, which is also a regular occurrence (most Angular apps have many state bugs).
Personally, I'm not too bothered by client-rendered/driven apps... They can often be split accordingly, and it's reasonable to stay under a 500 KB JS payload for a moderately sized web application. Not that everyone has that much awareness. I think of the browser as a thick-ish client toolkit as much as a rendering platform. That doesn't mean an SPA is the right way for everything; I wouldn't do it for a mostly-text content site (news, blog, etc.). But most of my work is web-based applications, not text-driven content sites.
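"Split accordingly" usually means route-level code splitting: the initial payload carries only the shell, and each route's chunk is fetched on first visit. A minimal sketch of the idea in TypeScript (the route names and loaders are made up; in a real app each loader would be a dynamic `import()` of that route's module):

```typescript
// Hypothetical sketch of route-level code splitting with a lazy loader.
type Loader<T> = () => Promise<T>;

// Wrap a loader so its chunk is fetched at most once, then cached.
function lazy<T>(load: Loader<T>): Loader<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= load());
}

// In a real app these would be dynamic imports, e.g.
//   lazy(() => import("./routes/reports"))
// Here the loaders are in-memory stand-ins for illustration.
const routes = {
  home: lazy(async () => ({ title: "Home" })),
  reports: lazy(async () => ({ title: "Reports" })),
};

// Only the visited route's code is ever loaded.
async function navigate(name: keyof typeof routes): Promise<string> {
  const mod = await routes[name]();
  return mod.title;
}
```

Bundlers like webpack and Vite split on dynamic `import()` boundaries automatically, so the pattern above is mostly about deciding where those boundaries go.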
You're not wrong, you have to be pretty invested to keep up to date with most technologies.
There's even a new crop of frameworks coming up now, like Astro, Qwik, Solid, etc. Plus a paradigm shift towards MPA and abstractions like server components.
I agree 100%. Framework devs forget that the rest of us have real jobs to do and can't spend weeks updating to the latest version of the framework every 6 months.
The job of framework devs (at least the ones working for companies on those frameworks) is to make the framework seem busy and evolving, even if that's not needed; if a framework is "stable" and "done", contributions drop fast and the project dies. For some weird reason, people don't see it as an advantage when a project on GitHub has only security and dependency updates but is feature-complete.
.NET is trying to move fast and break things; however, most devs are using it as they have for the past 15 years (with some new best practices for C# and the frameworks), and it's lovely working on that. Updating libraries is painless, even for projects over 5 years old. Try that with npm... after 5 weeks.
There's an effect (sic) from having open source and ubiquitous internet that makes everybody react (sicsic) to everybody whenever something is seen as the new way.
Before that, you had 2-3 years of working blindfolded before finding out what the other teams had been cooking up in secret.
The Angular API has been pretty stable for the past 6-7 years since v2 (well, since v4 if you count the Router changes), and the upgrade process between major versions is pretty smooth. Especially since the Ivy release, they've matured a lot... That's one of the reasons I still prefer it over React, especially in enterprise settings. (Although with their latest versions they are again trying to keep up with the cool kids by introducing standalone components and signals... not sure what to think about that yet)