Note that MPAs can also opt-in to prerendering, where the browser spins up a hidden background tab (sort of) that renders the next page, and then gets activated when the user clicks a link. This gives truly instant-feeling navigations. More detail at https://developer.chrome.com/blog/prerender-pages/ . For now page-to-page prerendering is Chrome only, but since it's a progressive enhancement that seems fine. (Safari does prerendering from the URL bar as well.)
SPAs can get this sort of thing too, but it's less automatic and generally requires framework support for keeping track of chunks of not-yet-visible, non-user-interactable inert DOM and matching them up with link clicks. Whereas with MPAs, because you're using the standard page rendering and link paradigm, the browser can manage for you.
Source: engineer on Chrome working on this feature :)
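For reference, the opt-in mentioned above is the Speculation Rules API from the linked Chrome article: a JSON block that tells the browser which links are worth prerendering. A minimal sketch (the URLs are placeholders):

```html
<!-- Hint the browser that these same-site pages are likely next
     destinations and may be prerendered. URLs are illustrative. -->
<script type="speculationrules">
{
  "prerender": [
    { "source": "list", "urls": ["/next.html", "/pricing.html"] }
  ]
}
</script>
```

Browsers that don't support speculation rules simply ignore the script block, which is what makes this a progressive enhancement.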
> Note that MPAs can also opt-in to prerendering, where the browser spins up a hidden background tab (sort of) that renders the next page, and then gets activated when the user clicks a link. This gives truly instant-feeling navigations.
I recall Opera doing this sort of thing around two decades ago. It had a setting to aggressively cache all links, and navigating between them was instantaneous.
As the sibling comment notes, this would unfortunately be too aggressive. In addition to the problems with non-idempotent GETs, doing full prerendering means downloading all subresources and running all JavaScript, so things like analytics or tracking pixels or "update the user's most recently visited document list" in a document-viewer app might get triggered as well. That is, in reality, while an HTTP GET might be idempotent on a well-designed page, very few people design their page so that an HTTP GET plus running all the page's on-load JavaScript is idempotent.
To solve this, for the modern implementation you can read about in the article, (1) prerenders are limited to same-site pages, to avoid cross-site identity joining; (2) prerendering is opt-in on the referrer side (and, for cross-origin same-site cases, on the destination side as well).
This unfortunately means only sites that are willing to put in the extra work, get the speed benefit. In particular, right now the effort of auditing all your third party libraries is probably the biggest cost. We're looking into what kind of providers we can work with to reduce this, by evangelizing them into becoming prerendering-aware (e.g., not logging analytics while being prerendered).
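A sketch of what "prerendering-aware" means in practice, using the `document.prerendering` flag and `prerenderingchange` event (names as shipped in Chrome; the `logPageView` callback is hypothetical):

```javascript
// Defer side effects (analytics, "recently viewed" updates) until a
// prerendered page is actually shown to the user. If the document is
// not being prerendered, run the callback immediately.
function runWhenVisible(doc, callback) {
  if (doc.prerendering) {
    doc.addEventListener('prerenderingchange', () => callback(), { once: true });
  } else {
    callback();
  }
}

// Usage in a real page might look like:
//   runWhenVisible(document, logPageView);
```

This is the kind of audit third-party analytics scripts would need before a site can safely opt in to prerendering.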
I remember reading once (but now I can't find the details, this may be apocryphal): a few browsers did try this for a while, but it turned out to be a disaster for certain sites. If, for example, you have a page loaded with a list of items, each of which has a delete link next to it (think phpMyAdmin): what happens if the browser automatically sends a GET request to every link it sees in order to prefetch the content? The server will just see this as a user clicking the delete link and automatically delete the item.
The solution to this is obviously that servers should follow the rules of HTTP methods better: GET is meant to be safe and idempotent (it shouldn't change state), so it should be impossible to delete data just by clicking a link. Instead, deleting should at least require submitting a form via a POST request.
In practice, though, browsers need to deal with badly designed servers, which is why almost all of this prerendering, precaching, etc stuff requires an explicit opt-in from the website.
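Concretely, the fix described above is to replace each state-changing GET link with a form submission (the action URL is illustrative):

```html
<!-- Unsafe: a prefetcher following this link deletes the item.
     <a href="/items/42/delete">Delete</a> -->

<!-- Safe: browsers and prefetchers never submit forms speculatively. -->
<form method="POST" action="/items/42/delete">
  <button type="submit">Delete</button>
</form>
```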
> SPAs can get this sort of thing too, but it's less automatic and generally requires framework support for keeping track of chunks of not-yet-visible, non-user-interactable inert DOM and matching them up with link clicks
Do you know of a React library for doing this? TanStack's react-query can prefetch [0], but it only prefetches from a REST API; it doesn't prerender into an invisible DOM node or anything like that. (Also, "prerendering" in React usually refers to server-side rendering of HTML text, not to prerendering something in the browser, so this is a bit hard to search for.)
The SPA vs server side rendered page debate is so exhausted at this point. The tech is such that they’re nearly equivalent if you have a good team that understands the appropriate technology.
You can have an SPA with a proper router that respects page history, loads html fragments from a server, remembers positions on reload, etc.
You can also have a server side rendered page sprinkled with JavaScript and use web sockets to create dom element level transitions and interactivity
The “MPA” described in the article has always been possible in a “SPA.” It’s more about the UX of your app than anything else
Your second paragraph essentially is creating a little mini browser, and you have to make sure to get every detail right or you’re back to the uncanny valley of SPA web pages. Performance can even be worse with SPA “page” navigation if multiple round trips are needed for the data on the next page.
SPAs should be used for web apps (think Slack or Asana); stick to server rendered for web pages.
I think you've highlighted a big problem with SPAs in that they can encourage multiple API calls per user action. IMO, you should probably have a single endpoint per thing a user might do that might combine several conceptual APIs. i.e., if you have an API to add something to a list, have that API also return the current list.
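A sketch of that endpoint shape (all names here are hypothetical): the mutation returns the updated collection, so the client renders in a single round trip instead of a write followed by a re-fetch.

```javascript
// Hypothetical server-side handler for "add item to list": perform the
// mutation, then return the whole updated list in the same response.
function addToList(store, listId, item) {
  const list = store[listId] || (store[listId] = []);
  list.push(item);
  // This object would be serialized as the JSON response body.
  return { added: item, list: [...list] };
}
```

The client that called the "add" endpoint can immediately render `response.list` with no second request.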
They have a waterfall of multiple API requests by the browser to display a single page. That means the front end has a bunch of logic to aggregate and piece together requests. This in itself isn’t a big deal, if you move to doing it on the backend you still have the same logic and need to make the same sub requests.
Where it does become problematic is it leads to fat API endpoints sending far more data to the client than is needed. GraphQL lets you choose what fields you query but odds are there’s still data available you can ask for that shouldn’t be asked for and sent to a browser.
If your requests/aggregations are done serverside it gives you a better chance of filtering out data that should never be accessible in the browser. You can also make full use of Cache-Control headers and backend caching.
Creating individual endpoints for web apps went out of fashion as it involves more effort and people don’t like effort to do things well. I see it re-invented recently as backend-for-frontend.
> Creating individual endpoints for web apps went out of fashion as it involves more effort and people don’t like effort to do things well. I see it re-invented recently as backend-for-frontend.
Not true; individual endpoints require more work for client-side optimizations, but the CRUD query-mapping effort is not reduced with GraphQL. If anything, GraphQL endpoints require more effort on the backend to build because of having to wire up the resolvers, unless you are using something like Hasura.
> SPA’s should be used web apps (think slack or Asana), and stick to sever rendered for web pages
Everyone seems to agree on this, but if that's the case then I'm very confused about why this is still a debate. The vast majority of developers are making apps, not "pages". Pages are useful for a handful of well-understood applications that already use CMS systems (blogs, ecommerce, forums, etc). So why are we all pushing for MPAs instead of better apps?
> and you have to make sure to get every detail right or you’re back to the uncanny valley of SPA web pages
You can’t get every detail right, because the browser doesn’t expose some of the pieces that would be required to do so—even apart from details that vary between browsers. The most obvious example of unexposed functionality is the loading indicator, but there are more around things like slow or failing networks.
No, it's not exhausted - people and businesses pushing heavyweight frameworks stand to lose out, and trying to silence the debate would help them.
React seems like a bubble to me, created by fancy code that seems good at first glance but doesn't make a huge difference. What's it supposed to be amazing at, simple stuff or complex stuff? If simple stuff then why are people downloading 200K for Hello, World? If for complex stuff why isn't something like VSCode, Monaco, CodeMirror written with React? Those have state galore, and React's one-way data binding is no revelation for them. In fact inside React things like the ironically named react-hook-form get around React's one-way data binding by letting the form inputs be uncontrolled* (ironically named because react hooks makes it sound like it's going with the grain of react when it's thankfully going against it). Maybe ChatGPT will pop the bubble by churning out vanilla js that beats it. :)
React 16 is about 5 KB uncompressed, it’s a tiny agnostic library that does very little and can be used to render to anything from a DOM to a native GUI to a dot matrix LCD screen (https://github.com/doodlewind/react-ssd1306/blob/master/docs...). Together with React-DOM it is about 100 KB, 30 KB gzipped (DOM library is bulky in part because it needs to support all of the default HTML and SVG elements). It’s not supposed to be amazing at anything except what it does, build complex or simple stuff it is up to you.
I disagree. Every time something is hyped, that extends the time it can be called a fad*. React gets hyped up freshly again with every announcement from Vercel**.
* well maybe a collection of fads rather than a fad in itself
** I say this lovingly, it's similar to the magic of Steve Jobs and Apple. That said Vercel wouldn't interest me that much at this point if it was React-only. But the DX works for many different sorts of projects.
For there to be a bubble, it has to be pierced. We haven't seen that with React. We did see that with Angular. I think the reason the bubble burst with Angular is that it baked in many opinions, and the more opinions you impose, the fewer people in aggregate are going to agree with your total outlook. It's easy to convince people of 1 thing, and less easy to convince people of N things. React being less opinionated and not a batteries-included option means it won't be going away for a while.
> React being less opinionated and not a batteries-included option means it won't be going away for awhile.
That's what they love to say, but it has never seemed that way to me. One way data flow seems to be a huge opinion.
Also dangerouslySetInnerHTML, htmlFor, className, etc, etc.
That last one was handled by the community, partly coming from Facebook, with stuff like GraphQL, but now they've gone more with batteries included route even on their website by suggesting to use a big framework: https://react.dev/learn/start-a-new-react-project
Also their opinions don't align with those of a lot of the community. A lot of glitches stem from controlled components.
A more fruitful way to think about the degree of opinionatedness would be to compare to alternatives. If "className" versus "class" is too much 'opinion', maybe vanilla JS is best for your eclectic taste. Also, you'd absolutely find something like Angular maddening in its degree of opinionatedness.
It isn't just the framework that's opinionated, it's the people behind it. An example is their website where they don't mention create-react-app at all here: https://react.dev/learn/start-a-new-react-project They could have added it so as not to confuse someone who had heard about create-react-app and understandably expected to see it on a page called "start a new react project" but they prefer to blot it from memory...
Is the discussion about frameworks or just React? They seem to be treated as interchangeable but React's foundations are not shared among other frameworks.
It's really about hype. I think React is good but overrated.
Edit: I thought about it some more and the problem is that in order to see adoption, frameworks need to convince devs that they need something. With Vue what you have is reactive() and ref(). So they're convincing a lot of devs that they need MobX. Where in reality most devs don't need React's useState one-way bindings and they don't need MobX-like reactive objects either, nor do you need Signals from Solid. That said, if you prefer one of these, go for it!
Ergonomics of binding events and setting attributes/properties/styles is useful and I like how Lit, snabbdom, and Svelte provide ergonomics without one of these state management paradigms.
React is perfectly hyped, the introduction and complete absorption of flux architecture proves this. It's changed how we even write vanilla javascript. Most of the lessons we're talking about right now about questioning whether we even need state management paradigms to such a degree would not have happened without React.
So if a problem exists about this scenario, it isn't about React, but how teams decide on tools. And unfortunately, most teams that aren't building their own solutions, go with the tool that has the largest "safest" community, not the one that accurately solves their issue.
That's not how I remember it. People were thinking a great deal about state management with Backbone, Angular.js, and Knockout.js. It intensified with React. I wonder what would have happened with Angular.js if not for React...
I have played thousands of bullet chess games on Lichess, and it is and was very snappy. It used to be based on Mithril, so it's very battle-tested in a literal sense.
FWIW I disagree that something like virtual DOM can be pure overhead. It is useful to have virtual just about anything. Virtual filesystems for instance.
> No, it's not exhausted - people and businesses pushing heavyweight frameworks stand to lose out, and trying to silence the debate would help them.
What do you think about frameworks like Leptos that take the best of both worlds?
> progressively-enhanced single-page apps that are rendered on the server and then hydrated on the client, enhancing your <a> and <form> navigations and mutations seamlessly when WASM is available.
I think they're pretty neat, and that Svelte is one.
"I used this technique in my own emoji-picker-element. If you’re already using Svelte in your project, then you can import 'emoji-picker-element/svelte' and get a version that doesn’t bundle its own framework, ensuring de-duplication. This saves a paltry 1.4 kB out of 13.9 kB total (compressed), but hey, it’s there. (Potentially I could make this the default behavior, but I like the bundled version for the benefit of folks who use <script> tags instead of bundlers. Maybe something like Skypack could make this simpler in the future.)"
It’s good when you want interactivity that’s technically straightforward but in high volume.
It doesn’t work for code editors because they need very close interaction with the DOM, whereas React provides an abstraction layer that covers the common cases.
It works, but clearly it's not needed, and I don't see a whole lot of innovation from a computer-science standpoint, because the most complex JS projects don't use it.
1. Building apps with manual mutations (which is what we did in the jQuery era). This involves a lot of boilerplate code, and it is hard to ensure that the UI state stays properly in sync with the app state.
2. Re-render the world on every state change. This works and allows you to write simple code that is React-like in that it is a pure transformation of app state into UI state, but it's slow and quickly runs into performance issues even in small apps.
1 still makes sense in the most performance sensitive scenarios. And 2 can work for the very simplest apps. But React and similar frameworks are great for everything in between.
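A minimal sketch of approach 2, assuming a `view` function from state to an HTML string (all names here are illustrative):

```javascript
// Approach 2: the entire UI is a pure function of state, and every
// state change re-renders everything. Simple to reason about, but the
// cost of each update is proportional to the size of the whole page,
// which is the performance cliff frameworks like React smooth over.
function createApp(root, view, initialState) {
  let state = initialState;
  const render = () => { root.innerHTML = view(state); };
  render();
  return {
    setState(next) { state = next; render(); }
  };
}
```

Usage: `createApp(document.getElementById('root'), s => `...`, { count: 0 })`, then call `setState` on every user action.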
> I don't see a whole lot of innovation from a computer science standpoint because the most complex js projects don't use it.
That seems silly. You could make the same argument for something like SQL. The most complex data manipulations won't use it. But it's still useful for the 90% that aren't that complex.
Screen readers don't read the hierarchy of elements back to you. They find the text that renders inside the elements and read it out loud to you. And for navigating between elements, the level of nesting is much less relevant than having the correct aria-role assigned to each element that contains an interactive component.
To be honest, I think people tend to overestimate just how long it takes to make a nice looking button or write some flexbox styling. Even standard form components are really not too time consuming to style.
What pushes me towards using third party code is stuff like autocomplete search with drop-down selects, mostly because I don't want to mess up on the accessibility front, either keyboard navigation or screen readers, and there's at least a few that have that part figured out already.
The idea of components is fine. I should have been clear, there. The problem typically comes from the authoring tools. To have the hooks necessary to put the decoration that we want, in html, they typically add a ton of div elements. When, realistically, you could almost certainly get what you want with very minimal markup.
And to be fair, I'm sure this has gotten a bit better in recent years. But the Rube Goldberg efforts people would put in to get the "flow" of the browser to automatically place things in locations that were easily calculated are frustrating.
Because in an MPA a navigation to a new state is obvious: the page reloads, the screen reader can inform the user and start again from the top.
In an SPA clicking on something could replace the content in some other area of the screen - but if you don't add the right additional code the screen reader user has no way of understanding what just happened.
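The "right additional code" is typically an ARIA live region that announces swapped-in content. A sketch (element id and wording are illustrative):

```javascript
// Assumes an empty live region somewhere in the page, e.g.:
//   <div id="route-announcer" aria-live="polite" role="status"></div>
// After a client-side route change, setting its text causes screen
// readers to announce the new "page" even though no navigation occurred.
function announceNavigation(announcer, pageTitle) {
  announcer.textContent = `Navigated to ${pageTitle}`;
}
```

This is exactly the kind of thing the browser does for free on a real MPA navigation.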
I'm really enjoying using htmx nowadays to sprinkle interactivity onto server-rendered pages with almost no JavaScript. Its extra attributes make common page interactions easy. Highly recommended: ~12 KB gzipped, zero-dependency JavaScript. Example:
<button
    title="Remove 'ABC' from cart"
    hx-delete="/cart/123/item/abc"
    hx-target="#item-abc">
  Remove
</button>
Clicking this button will trigger a `DELETE /cart/123/item/abc` request and swap the response into the element selected by the `hx-target` selector. The response would be an HTML fragment. So instead of an API serving JSON you would have an API serving...HTML. It's a neat fit with any server rendering technology.
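The server response for a request like this might be a fragment along these lines (markup illustrative; by default htmx swaps it into the target element's innerHTML):

```html
<!-- Returned by DELETE /cart/123/item/abc -->
<span class="removed-notice">
  Removed from cart. <a href="/cart/123/undo">Undo</a>
</span>
```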
I think it's worth calling out accessibility directly, in particular screen reader compatibility.
I have yet to find a great guide to making SPAs work well with screen readers that goes beyond "read the ARIA spec" - but the ARIA spec isn't actually that useful for understanding the nuts and bolts of how you should build things so that e.g. screen readers know when the SPA has navigated to a new page, or loaded fresh content in a smaller page region.
My understanding is that MPAs are, by default, massively more accessible than SPAs. But my experience is that SPA authors rarely seem to indicate that they care.
You need to have a first-class understanding of accessibility to not have a11y problems. If your MPA framework allows you to define an "onClick" handler with a div tag, you're back in React's set of accessibility problems (versus using a button element, etc).
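For example, the div-with-onClick pattern forces you to recreate by hand the semantics a native button provides for free (sketch; the `save` handler is hypothetical):

```html
<!-- Inaccessible by default: no role, no focus, no keyboard activation -->
<div onclick="save()">Save</div>

<!-- What it takes to roughly approximate a button by hand -->
<div role="button" tabindex="0" onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">
  Save
</div>

<!-- Or just use the native element -->
<button type="button" onclick="save()">Save</button>
```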
Poorly phrased. React has no inherent a11y issue. People misapply it too frequently due to a lack of fundamental knowledge. MPAs cater toward less interactive content and more static pages, so they are less likely to have a11y bugs.
I wouldn't be surprised one bit if Angular apps failed a11y audits at similar rates to React apps. But it isn't a problem with SPAs as an architectural concept; the problem is uneducated engineers.
Using a framework that is accessibility friendly is probably the best first step. Vuetify has gotten good feedback, with little to no additional steps for my implementations. If I'm remembering correctly, Tailwind's Headless UI, Chakra, and a few others are in the same boat.
Searching for "a11y [framework]" on GitHub gives good results and some "awesome" pages that link to resources on the subject.
I see comments like this a lot, and it always makes me wonder if we have collectively forgotten just how damaging IE was to the web, or if our default vilification of a browser we don't like is just to compare it with IE.
I never said Chrome was not damaging. In fact, its damage is likely to be more widespread. The underlying point here was that it feels like we've picked an apples-to-oranges comparison when we say just how bad Chrome is compared to IE. Both were popular for different reasons, during different points in the internet's history, and are damaging in different ways.
If you’re talking about the View Transitions API, it’s a fairly unimportant progressive enhancement that wouldn’t even be a good idea for most sites. It doesn’t matter if the browser doesn’t support it.
The issue with large JavaScript files is the overhead of the browser parsing, interpreting, and executing the JavaScript. The network transfer size has nothing to do with it.
Just because video will be a much larger file, it does not mean that text and forms have to be, too (unless you are writing an entire book, I suppose).
What confuses me most about these articles is the idea that people jumped on these frameworks just to chase new shiny things. There is always resume-driven development going on, but did these frameworks really not improve anything about the web experience for users? My career started around the same time React and Angular took off. Why did we move away from MPAs if they were so much better?
OK, so how did they get so much market share? Cargo-culting from Facebook? Or was it the appeal of pushing compute to the client, like the other commenter said?
The shift to SPAs happened around the same time as mobile browsing was becoming the norm. Phones were alright, but data speeds were really slow.
So instead of having to download an entire HTML page and all the assets used every time you navigate, people built SPAs. Download everything up front, and then fetch data with small API requests. This way even though first load might take a while, everything is cached, and the client just needs to query the API for data.
The other side of it is that websites used to be pretty simple, and as browsers evolved past IE10, the kinds of things people build on the web also got more complex.
I think the view transitions API will be good, but it's currently only available for Chromium browsers so it is not really a competitor to SPAs for the time being. I'm also kind of opposed to really broad generalizations like this since what is most effective currently kind of depends on what you're building. A calculator built as an MPA would be kind of ridiculous.
Nobody ever talks about the monolithic all-in-one framework upgrade? You really haven't come to terms with SPAs until you've done the all-in-one framework upgrade, man. It's like running through a hailstorm of razor blades.
Devs who push for monolith all-in-one SPA frameworks are seldom ones who stick around long enough after shipping to experience what you're discussing. They're far more concerned with the new framework hotness vs. building a lasting and consistent user experience.
Remember the explosion of front-end tools no one uses anymore?
They usually follow the pattern of 3 seconds of hype followed by 10 years of digging out a mountain of technical debt from some asshole who has since moved on.
> Search engines use site performance as a search ranking signal, which makes good performance crucial.
So this is probably wildly naive, but I find myself wondering why SEO is so important still in 2023. Like, how much traffic is coming from Google that converts to paying customers or clients? I suppose it's industry specific, but for my friends that have sold product on e-commerce sites, the best way to make money seems to have been find a way to post on reddit for a given niche community without getting banned, and Instagram ads. Maybe Facebook ads.
For a SaaS, I'm really curious how much converted traffic comes from Google. When I need a SaaS I usually first ask all my friends, then start searching "service_i_need reddit sysadmin" to see what other people are using and their experience. Maybe I'll read a listicle from an aggregator I trust.
Also, you can just pay Google to be at the top of whatever search results, so you can Optimize by just giving them money.
The exact data on how much search traffic converts varies wildly from business to business so it’s hard to give a ballpark figure. The big win from SEO traffic that I’ve seen is that CAC for these customers tends to be a lot lower than for customers coming in via ads.
Paid advertising and posting on social media has its place, but there are a LOT of people using Google and getting to rank 1 for important keywords usually works out cheaper than paying for the ad space.
Sure, but I'm asking, who's being found these days on Google? It seems to me people are being found on listicles, reddit, instagram, facebook, and various word of mouth channels.
> Inclusive and robust by default, Energy-efficiency, Privacy-focused
I imagine trying to use these reasons with executives and getting laughed out of the room
Single page applications do not have to be loaded all in advance:
async function main_state_changed(aState) {
    // fetch returns a promise, so the handler must be async;
    // aState here is the URL of an HTML fragment on the server
    const response = await fetch(aState);
    main_state_display.innerHTML = await response.text();
}
This way your app loads parts on an as-needed basis. As for tooling: I do use minification, but the rest is plain HTML, CSS, and JavaScript plus a couple of selected libs. No frameworks, and it works wonders for me.
You can load any file from the server into a string, for example with a fetch call. You can then assign the innerHTML property of any DOM element of your choosing to that string [0]. The string can contain any valid HTML and JavaScript. This makes your front end modular and scalable. It is dead simple and does not require anything but plain JavaScript.
It is amazing how modern web developers flock to that mess of frameworks and tooling to achieve what basic JS and no tooling do way more efficiently and with less time spent.
Interesting idea, though I'm wondering for example how you manage namespaces, or for example one component extending or using another one. Do your various modules just have lots of checks about certain valid states on the global namespace?
You can do some reading about state machines and their executors if you are curious about ways of managing states.
I do not keep to single tool / strategy etc and use whatever I believe is the best choice for particular task. I write everything: firmware for MCU, multimedia apps with hardware accelerated graphics, enterprise backends and middleware, web front ends, device control and whatnot, so while some tech is universal the specifics can be highly different.
Oh, I'm aware of the theoretical concepts of state machines etc, I guess I was asking about more in-the-weeds levels, but it sounds like you really and truly are just approaching these problems from engineering fundamentals and coding from scratch. That's pretty cool. All you've done is just pique my curiosity further though, I would love to be able to work on such a wide breadth of fields, are you open to sharing how you get to work in such a variety of platforms? I'm at the point in my career where I'd like to aggressively move off all the web-only work I've done the last half decade or so.
>"I would love to be able to work on such a wide breadth of fields, are you open to sharing how you get to work in such a variety of platforms?"
I guess we have different education and general background. I am a 62 yo fart. I was raised in the former USSR, in a satellite town that existed solely to accommodate a bunch of scientists working in various fields. When not in school I would spend my time visiting my parents' place of work, and got to play with very cool "toys"; the people there were nice enough to let me "work" on tiny projects of various natures. I then did my M.Sc. in physics / biophysics and continued to work as a scientist. So I was basically trained to solve various problems in a creative manner. I was never trained specifically in software.
I had wonderful teachers and mentors and can't ever express enough gratitude to them. Many of them are dead by now anyways.
During the course of my work as a scientist I needed to design various mechanical gizmos, electronics (digital and analog), and write some software, since nothing existed for my particular field. I just got myself a whole bunch of books and read and tried until I succeeded.
Then "perestroyka" started and I began to supplement my "official" income by writing software as an independent consultant. The computer industry back then was very young, and I had no trouble finding contract work to create various products. My first side software project was a PC-based visual designer (something akin to a music sequencer) to control the lighting system for a big theater.
I then immigrated to Canada and eventually ended up working basically as chief software product designer and implementor. First as an employee in software development company and then on my own.
As for diversity of the fields - when working on particular product I imagine software as a bunch of living components interacting with each other. I then nail down what those components and interactions are. I then get down to implementing it. Particular computer language, deployment platform and other pesky details are the last thing I worry about.
Over the years I've accumulated a large portfolio of products and present those as references when getting new contracts. I mostly deal with business owners, and those do not give a flying fuck about languages. They want actual problems solved.
>"coding from scratch"
I do extensively use various libraries. But very specific problem oriented. Like graphics libraries. I avoid frameworks like a plague most of the time as those in my opinion degrade creativity and keep one hostage.
Angular allows you to split your code into modules and then load them on-demand. For example, we have a module for the admin pages, which is only loaded when routing to one of them, which means that module is never loaded for non-admin users.
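That on-demand loading is configured in the router via a lazy `loadChildren` dynamic import. A sketch, written here as plain JavaScript (module paths are illustrative; real Angular projects typically use TypeScript):

```javascript
// Angular route configuration: the admin module's bundle is only
// fetched from the server when a user actually navigates to /admin,
// so non-admin users never download that code.
const routes = [
  {
    path: 'admin',
    loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule),
  },
];
```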
> Single Page Application frameworks in the last few years have pivoted away from client-side rendering to server-rendering and we welcome this improvement. However, the large starting size of client JavaScript bundles customary to SPA persist: Remix (228 kB), Next.js (248 kB), Gatsby (210 kB), and Nuxt (191 kB).
Good point, I also think stimulusreflex.com deserves some highlight in this case too. This is where it shines!
The benefits they list for classic multi-page applications are good. Even disregarding search and caching, I care enough about the other things listed there (working even if scripts are disabled, or with extensions, with other web browsers, accessibility, back/forward/scrolling, energy efficiency, etc) that classic multi-page applications are good.
The organizational and maintainability benefits you get from using something like React far outweigh the actually quite minor file-size bump. That is the least of your worries.
source: myself; someone who moved to working on a project entirely written in vanilla JS… which has just become its own bespoke framework.
It also benefits users in that you can implement more features and fix more bugs in less time. Performance is a feature and what that means is that while you can't just completely ignore it, it's not always the highest priority.
Can't think of any reason why this got downvoted, except that clearly the ReAnguVue mafia is omnipresent these days. Vanilla JS (and even that in moderation) FTW.
I mostly agree, but in most cases you should not even need client-side document scripts at all. (In some cases, making optional features with scripts can be helpful. In much rarer cases, making them mandatory might be needed, but even then there should be ways to download the data or access the protocols even when JavaScript is disabled.)