I'd say for most web content, minimal JS gives the best experience. With all the integration and UI improvements browsers have gone through in the past ten years, all most content needs is some CSS to add a theme and some minor readability tweaks, unless there's some interactive animation that adds value to the content on the page. You don't even need JS for video or audio anymore; it's all there in the browser.
It's sad how most websites these days are fully-featured web applications. Take this blog page by Google. A simple web page with some pictures, loading over 300 kB of JavaScript libraries and tooling. And this isn't even an exception. It's sad, really, and a waste of readers' bandwidth.
The principle of least power still applies in the FE space. If you can get away with rudimentary technology, do it.
The problem I find is usually misaligned incentives. When you go to BuzzFeed you just want to read an article, but BuzzFeed wants to track you and sell you ads. It's not in their best interest to keep it simple.
> The Web framework that has decades of experience in large scale production deployments and has the ability to pack FE libraries as components.
Same question. Who's going to write all those 'components' wrapping libraries for each of those when everyone has already moved on to react or vue? Some disgruntled retrograde? Am I supposed to waste time on it so that you can bask in the glory of 'pure html and js'?
I mean I don't like JS either but the whole "fad of the day" thing is getting pretty old don't you think? It's been six years since React was first released.
Vue/React is moving heavily into SSR which is the future of web development IMO (37signals called it long ago). It's the best of both worlds of old staticy HTML + interactive components.
I used to look down on something like Next.js/Nuxt.js [1] for simple (and advanced) websites until I used it, and now I'm 100% convinced it's the future of web development: most of the content is built with components but pre-rendered server-side, and the JS loads to 'hydrate' the interface with interactive elements. I've stopped using heavy frameworks like Phoenix as a result and am switching to simpler backend APIs written in Rust/Ruby/etc., with Vue SSR handling the full frontend.
Combined with tree-shaking, chunked JS via webpack (which automatically loads tiny .js files on demand based on which components the page/route needs), PurgeCSS (which removes all unused CSS), critical-CSS inlining and PWA tools, and other easy-to-use optimizations, you can have a fully feature-rich JS site with SEO-friendly HTML pages and very fast page loads.
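For anyone unfamiliar, the on-demand chunking described above is mostly one webpack setting plus dynamic `import()` calls in application code; a minimal config sketch (file names illustrative):

```javascript
// webpack.config.js (sketch): splitChunks factors modules shared across
// entry points into common chunks, and any dynamic import("./Foo.js")
// in application code automatically becomes its own chunk that the
// browser only fetches when that code path actually runs.
module.exports = {
  optimization: {
    splitChunks: { chunks: "all" },
  },
};
```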
We're just seeing the beginning of Vue/React and it's going to improve the web from the ills of the past, not make it worse. Better performance and more importantly high-quality well composed codebases, plus TypeScript which helps JS applications scale in size while keeping them sane.
This is insightful and deep, but it begs the question:
Does my text only newspaper article really need all this crap? Like, it's text, with maybe one image or two. Just give me the damn text and cut out the cruft.
Why are browsers now competing with reader modes? We're sending a bunch of data just to have the browser remove it. Just don't send it to begin with!
"Consider whether static rendering or server rendering can get you 90% of the way there. It's perfectly okay to mostly ship HTML with minimal JS to get an experience interactive."
In other words, just serve some HTML so your users can read your content.
On the other hand, your newspaper writers still need to get paid and unless you're a subscriber, that means ads and cruft on your page for ads.
We are hosted on infrastructure that doesn't let us change this (although we are currently migrating a lot of the content off onto another site that we do control). Like many developers, we can only do what we can with the tools we have, but we are changing it.
This is an excellent point and something I'd like to see discussed more broadly. Gzipping and minifying assets pales in comparison to simply not sending the bulk of the content to begin with. It would be great if browsers could set a header value when requesting a page, and, if supported, the server could respond with content that's already in what would otherwise be the so-called reader format.
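A rough sketch of what that negotiation could look like server-side. The "Prefer: reader" header value here is entirely hypothetical; no such standard exists today:

```javascript
// Hypothetical content negotiation: if the browser announced it wants a
// reader variant, serve a stripped-down document instead of the full page.
// The "Prefer: reader" header value is made up for illustration.
function selectVariant(headers) {
  const prefer = (headers["prefer"] || "").toLowerCase();
  return prefer.includes("reader") ? "article-reader.html" : "article-full.html";
}
```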
Because the ultimate goal of the site is not to serve you news. If it was, it'd just be an FTP server or a CDN. The site exists to track you and sell advertising space, end of story.
> Does my text only newspaper article really need all this crap? Like, it's text, with maybe one image or two. Just give me the damn text and cut out the cruft.
Petitio principii as the conclusion is a restatement of the premise.
Also it's a rant, not a formal logical argument. Go outside, pet a dog, drink a nice beer, stop wasting your time correcting modern vernacular.
This goes into great depth explaining what has become a tired argument. No, you don't need React for everything. Yes, there are lots of benefits to rendering static HTML if your interface doesn't have much dynamic state. And yes, React can still be a useful tool in the right context. Those who say you should use it for everything are naive. Hopefully your team lead who's making these decisions isn't naive. It's possible they are.
I really don't think that anyone who's been in web dev for more than a couple years still disagrees with the above points.
There aren't many people in the React community who are explicitly arguing that everything should be React. But there seem to be very few things that the React community believes are a poor fit for React.
Take addons.mozilla.org. It's a simple, tiny website with a list of browser addons and a button to download each one. Perfect fit for server-rendered HTML, right? Nah, front-end developers rewrote it in React. [1]
Or blogs. Blogs are just a collection of text pages with occasional images and links. Surely you should use static HTML for those, right? Well, okay, but only for the initial page load; after that, make sure you let React take over so it can "improve" your page navigation. [2]
You put "improve" in quotes as if instant page navigation without flickering is somehow worse than the alternative. Browse around this [1] website and tell me it is not a better experience than your bog-standard html + vanilla js concoction. Also every time this is brought up the developer/content editor experience is entirely neglected, when in reality it is a big consideration when creating something other than your own dev-blog that you update once a year.
I tried the site you linked on my phone. It is not a better experience than a standard HTML site:
1. The Back button had a noticeable half-second lag.
2. When I temporarily lost connection and a link I clicked was taking a long time to load, I wasn't given the option to stop loading it like I am when I click a normal link.
I tried disabling JavaScript like a sibling comment suggested, and the website was indeed faster and still flicker-free. (Where do front-end devs get this idea that you need tons of JavaScript to do flicker-free page navigation? Have they never used Hacker News?) But then I wasn't able to see the interactive code examples on the homepage, because those genuinely needed JS to work.
This is because disabling JavaScript is an all-or-nothing proposition. There's no "throw out bathwater but not baby" button or "allow JavaScript only for things it should actually be used for, and not clumsily reimplementing browser features while forgetting half the edge-cases" toggle-switch.
> You put "improve" in quotes as if instant page navigation without flickering is somehow worse than the alternative.
Yes, instant page navigation without flickering is better than the alternative - instant being what you get from reloading a static HTML page, alternative being React.
Seriously, fetching, parsing and displaying a tiny bit of HTML is fast. As fast or faster than the React virtual DOM shenanigans, once you account for the computational load the framework adds on top of your simple page.
Real life is not code golf. We do not build things entirely for ourselves. We build things for our employers, for our co-workers, for our family members and for our friends. We build to create value for someone and value is not a one-to-one correlation with the weight of a site in KB.
If I can make it easier for me and anyone working with my site in the future by sacrificing 50-150 KB then I will take that deal any day of the week. That is what abstractions are for. That is why we do not work in byte-code. We pay for convenience in clock-cycles and disk space.
Arguably, using React where a static HTML with maybe a little bit of vanilla JS would do, is code golf.
HTML and CSS are not bytecode. They're high-level abstractions, hiding a very complex renderer underneath. Sometimes you need to build another tower of abstractions when this doesn't suffice - like when you're trying to build an application with complex GUI in the browser and you need an adapter between DOM and a more suitable GUI pattern. But displaying text and images communicating a message is not one of those cases.
We do not rebuild the tower of abstractions for every site that uses them. I do not personally implement React from scratch when I use it for any of my sites. I get all of the benefits for none of the effort. You do not seem to acknowledge factors other than straight-up performance, as is evident from what you are picking and choosing from my comments, so obviously it is not attractive to you.
Sorry then, I guess I have no idea what this discussion was about in the first place. Why would anyone care about the build tools of someone else's site if it doesn't affect the user?
There is no clear line between "what can be obviously done with vanilla HTML/CSS" and "that problem definitely needs React". So, most people opt for something like React to enable the capability, in case they need it. There's nobody purely saying that 100% of everything should be in React, but that's effectively the outcome.
ALL (all) of the problems they face are either self-inflicted or due to other web developers, such as those who design web advertisements or bloated JavaScript libraries. The problems with sending multiple simultaneous streams to a browser are due to web developers deciding to design a web page with hundreds of individual files to be downloaded. Rendering times are long because of the complexity that today's sites are built to attain.
I can view simple sites EXTREMELY quickly today -- that is to say, sites that designers have not "improved" to the point that the site is no longer worth visiting. A simple page shows me what I want to see, and doesn't take a designer 400 hours to come up with.
All of today's problems on the web are due to web developers.
I'll fight and die on this hill, and no amount of downvotes will change my mind about this in the slightest.
A fun thought experiment: how would you go about incentivizing the clients who hire web developers to create fast loading pages but still have Javascript functionality? More importantly, who is in a position to incentivize clients who care more about tracking and marketing than performance?
Google has been factoring load times heavily into their search rankings for a while now. While I don't agree with a lot of their priorities, this gives a designer who's so inclined a lot of opportunity for encouraging clients to do the right thing: just show them their site's PageSpeed results, and they'll sober up.
What I don't like about the current state of server-side rendering with hydration is that the server still sends the initial state data even though it is already contained within the initial HTML. This means the site is effectively rendered twice on first load: first from the HTML, then again after the initial state (some JSON) arrives, since the state atom is usually null at that point. Why can't the initial state just be reconstructed from the HTML (data attributes, etc.)?
There are many nuances though. Some data/state may not actually be rendered but is used later. It's also not really accurate to say it's rendered twice; it's rather hydrated (before hydration this used to be a big problem: sites actually re-rendering/flashing when the JS loaded/initialized).
I guess you could write "unrendering" code for each component that transforms HTML into its internal state (if that's even possible) and you'd have to keep that code path up to date with any changes to your rendering code. This sounds really error-prone compared to hydration.
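To make that trade-off concrete, here's a toy sketch of such a per-component "unrendering" path. The data-count attribute name is invented for this example:

```javascript
// Toy "unrendering": recover a counter component's initial state from the
// server-rendered markup instead of a duplicate JSON payload. A real
// implementation would need one such parser per component, kept in sync
// with that component's render code -- exactly the error-prone part.
function stateFromHtml(html) {
  const m = html.match(/data-count="(\d+)"/);
  return { count: m ? Number(m[1]) : 0 };
}
```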
The article talks about Google Search, and the Google tools it points to are good at telling you how Google Search will react. But things like the Internet Archive and other search engines are not going to handle full-CSR apps as well.
For those here who use other systems like DuckDuckGo, that makes a difference. I really started to notice this when I visited some sites (not apps) via the Internet Archive to get old versions and found things broken.
PHP is not a templating system, it's a web-centric programming language. It doesn't support any of the features one would want in a modern templating system, such as auto-escaping or tainting of user supplied data, knowledge of the current rendering context, native support for HTML or XML entities, macros, template caching, etc.
All of these features and more have to be implemented as a third party framework for PHP to be effective as a templating system, which is why such systems exist.
Merely echoing HTML strings and wrapping variables in htmlspecialchars() does not a "templating system" make. If it did, then every programming language would also be a templating system as long as it could output a string of characters to the necessary port.
I was thinking of being able to write regular HTML with embedded code (<?php ... ?>) and variable expansions like <?=$foo?>. For example: a for loop over some records (say, table rows) whose body includes another PHP file containing the HTML to emit, with <??> tags that reference the object being looped over.
It has been a very, very long time since I used PHP for anything serious (or did any web programming at all), but to me that sounds like templates. Perhaps you are referring to something else with the word "template"?
Yes, but my original question was: since it can be written that way, why do you need to implement a template system when the language you are implementing that template system in can also function as a template system itself? Isn't that a waste of time and resources?
>why do you need to implement a template system when the language you are implementing that template system can also function as a template system itself?
Because PHP, natively, doesn't provide the features that a modern template system needs to be safe, scalable or productive. PHP doesn't recognize HTML or XML as a type, or a "template" as a thing, automatically escape user supplied data (correctly, based on context) or correctly deduce content-type headers or route requests or any number of things one expects of a modern framework or templates. As a PHP application grows in complexity, you will, inevitably, find yourself reinventing frameworks. So might as well just use one that's already been proven in the wild and optimized.
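For contrast, here's a minimal sketch (in JS, purely to illustrate the concept) of the auto-escaping a real template system provides: every interpolated value is escaped by default, so nobody can forget the equivalent of htmlspecialchars():

```javascript
// Minimal auto-escaping template tag: interpolated values are HTML-escaped
// by default, the opposite of PHP's echo-exactly-what-you-wrote behavior.
function escapeHtml(s) {
  return String(s).replace(/[&<>"']/g, (c) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c]));
}

// Tagged template: html`<p>${userInput}</p>` escapes userInput automatically.
function html(strings, ...values) {
  return strings.reduce((out, str, i) => out + escapeHtml(values[i - 1]) + str);
}
```

With this, a malicious value comes out as inert text rather than live markup, without the developer doing anything per call site.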
All PHP has is string concatenation. Its "templating system" is blind string concatenation; that's it.
This area desperately does need its terminology nailed down, but I don't really like the proposed terminology. It's kind of hard to understand and follow. (The meat is in the table at the end of the article.)
If you actively use w3m/lynx, it's actually surprising how many websites can still be visited with a non-JS-enabled browser. But I guess I set my bar way too low, as I expected pretty much every website I visit to behave like an npm black hole.
Did anyone else notice they used the term "uncanny valley" in the article for something totally unrelated to what it means? Or maybe they did mean it, but the paragraph didn't allude to that very well. Or maybe they heard the phrase somewhere and reinterpreted "uncanny valley" into what they thought it meant...
It's not unusual to see it in the context of UI as something that looks interactive but is not / looks done but feels wrong. Like the prank of replacing someone's desktop with a screenshot of itself :)
React and similar JS frameworks were created by walled-garden companies that do not want most of their content indexed by Google. Use them on applications and content you don't want indexed by search engines.
I'm working on a site right now that doesn't do any SSR—it's just an index.html with a bundle.js on it. Google indexes it fine and surprisingly we have great SERPs. 2nd result for "trillion trees"
It is perhaps possible that google indexed it from a previous version that did have html, or they are using a sitemap.
(I've just joined the project, and I intend to get them set up with proper SSR soon.)
I'm not sure I entirely agree - Facebook is mostly login-walled anyway and they developed React to suit the needs of a data intensive SPA. Why not just use a robots.txt to stop crawlers from indexing?
In a similar fashion to React itself, it's not that easy once routing, caching, context data, request or query parameters, data fetching and state management, environment variables, sessions etc all come into play.
React has a server-side rendering API. So, I find it difficult to rationalize an argument where FB purposefully launched React to get people to return virtually empty HTML documents from the server (at most, a script tag which "boots up" the page content) in order to make the task of indexing difficult.
Facebook indexes many sites itself for its link-preview functionality. Preview images, page title, and description are all derived from Open Graph tags.
Really confused by the terminology. I assume in this context "rendering" does not mean translating an abstract scene description into pixels on screen, but something else.
"Rendering" in this context is turning an abstract representation of the content of your page (or state of your application) into HTML (either as bytes delivered in an HTTP response from a server, or as DOM nodes created by JavaScript on the client).
For Hacker News, "Server Rendering" is taking the top 25 stories, creating a large string containing all the HTML tags, then serializing that to a byte stream.
A hypothetical "Client Rendering" alternative to HN might be downloading a JSON chunk containing the data about the top stories, then using a JavaScript framework to create DOM nodes in the user's browser that create the same document client-side.
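That contrast can be sketched in a few lines; the server-rendering half (with escaping omitted for brevity) is just string building:

```javascript
// "Server rendering" for a hypothetical HN-like front page: turn the story
// data directly into one HTML string, ready to ship as the HTTP response
// body. (A real version would HTML-escape titles and URLs.)
function renderStories(stories) {
  const items = stories
    .map((s) => `<li><a href="${s.url}">${s.title}</a></li>`)
    .join("");
  return `<ol>${items}</ol>`;
}
```

The client-rendered alternative would instead ship that same story data as JSON and run equivalent code in the browser to build DOM nodes.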
No; "Server Rendering" is creating an HTML document as text, then serializing it to bytes and delivering it in an HTTP response. There's no painting in this scenario.
This is great, it really proves that you don't need a lot of styling and interactivity to get a message across.
I'm sounding like my grandpa, but... back in my day, things were fast, simple, and they worked. Now we've got this huge mess of technology just so we can track users, sell ads and annoy our site visitors?
Is there another name for "Trisomorphic Rendering"? There are not many hits. It seems like Nolan Lawson coined the term in 2016[1] (see quoted tweet, since deleted).
This is very disappointing to read. I encourage anyone to compare the architecture and DX of client side rendered apps based on Create React App to Next.js. Next.js is very immature and clunky in comparison.
a) From the way your comment is written, it's pretty clear that you actually just want to say what's between the brackets in your second comment. It comes across as passive-aggressive and doesn't contribute anything.
b) The page mentions who wrote this article. The names, positions at Google and pictures are all there.
Some might even say it is better to mostly ship HTML with minimal JS.
ducks