Rendering on the Web (developers.google.com)
138 points by kaycebasques on Aug 19, 2019 | 98 comments



> It's perfectly okay to mostly ship HTML with minimal JS

Some might even say it is better to mostly ship HTML with minimal JS.

ducks


I'd say for most web content, minimal JS gives the best experience. With all the integration and UI improvements browsers have gone through over the past ten years, all most content needs is some CSS to add a theme and some minor readability tweaks, unless there's some interactive animation that adds value to the content on the page. You don't even need JS for video or audio anymore; it's all there in the browser.

It's sad how most websites these days are fully-featured web applications. Take this blog page by Google: a simple web page with some pictures, loading over 300 kB of JavaScript libraries and tooling. And this isn't even an exception. It's a waste of readers' bandwidth, really.


CSS animations on the :hover and :active pseudo-classes can go a long way toward an interactive experience without running programs, too.
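
For example, a minimal sketch (the .card class name is made up):

    /* Animate on hover, no JavaScript involved */
    .card {
        transition: transform 0.2s ease, box-shadow 0.2s ease;
    }
    .card:hover {
        transform: translateY(-4px);
        box-shadow: 0 4px 12px rgba(0, 0, 0, 0.2);
    }
    /* Press feedback via :active */
    .card:active {
        transform: translateY(-1px);
    }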


The principle of least power still applies in the FE space. If you can get away with rudimentary technology, do it.

The problem I find is usually misaligned incentives. When you go to BuzzFeed you just want to read an article, but BuzzFeed wants to track you and sell you ads. It's not in their best interest to keep it simple.


For the sake of climate change, HTML with minimal JS/without JS is preferable.


For some markets, it is even a competitive advantage. :)


Who is going to maintain that loosely linked spaghetti of html with minimal js? You?

If you're not going to, don't tell me how to work with code I gotta work with.

Can we just agree to let this meme die? React and Vue didn't appear because developers wanted bloat.

If you want to reduce bloat, take that to corporate greedy actors stuffing pages with tracking and don't ever touch tools that make my life easier.


The Web frameworks that have decades of experience in large-scale production deployments and the ability to package FE libraries as components.

Spring, JEE, ASP.NET, Rails, Django,...


> The Web frameworks that have decades of experience in large-scale production deployments and the ability to package FE libraries as components.

Same question. Who's going to write all those 'component' wrappers for libraries on each of those when everyone has already moved on to React or Vue? Some disgruntled retrograde? Am I supposed to waste time on it so that you can bask in the glory of 'pure HTML and JS'?


The fallacy of that argument is that actually not everyone has drunk the react or vue Kool-Aid.

In a few years the React/Vue fad will be gone, while HTML/CSS/VanillaJS will still be around.

We are the ones not wasting time learning fad of the day stacks.


I mean, I don't like JS either, but the whole "fad of the day" thing is getting pretty old, don't you think? It's been six years since React was first released.


I do like JS, served as VanillaJS.


> In a few years the React/Vue fad will be gone,

Certainly. However, something even better will replace it and you guys will moan about it again. It only hurts yourself if you refuse to learn.


It has worked pretty well so far.

Coffeescript, Dart, AngularJS, Dojo, MooTools, Prototype, Bower,...

So much time I saved by keeping focused on SSR frameworks, alongside VanillaJS.


Vue/React are moving heavily into SSR, which is the future of web development IMO (37signals called it long ago). It's the best of both worlds: old static-y HTML plus interactive components.

I used to look down on something like Next.js/Nuxt.js [1] for simple (and advanced) websites until I used it, and now I'm 100% convinced it's the future of web development: most of the content is built with components but pre-rendered server-side, and the JS loads to 'hydrate' the interface with interactive elements. I've stopped using heavy frameworks like Phoenix as a result and am switching to simpler backend APIs written in Rust/Ruby/etc., with Vue SSR handling the full frontend.

Combined with tree-shaking, chunked JS via webpack (which automatically loads tiny .js files on demand based on what components the page/route needs), PurgeCSS (which removes all unused CSS), PWA/critical-inlined-CSS tools, and other easy-to-use optimizations, you can have a feature-rich JS site with SEO-friendly HTML pages and very fast page loads.
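
To illustrate the on-demand chunking: a dynamic import() is all webpack needs to split a component into its own chunk, fetched only when the code path actually runs (the component name and path here are made up):

    // webpack emits HeavyChart as a separate .js chunk; the network
    // request happens only the first time showChart() is called.
    async function showChart(container) {
        const { HeavyChart } = await import('./components/HeavyChart.js');
        new HeavyChart(container).render();
    }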

We're just seeing the beginning of Vue/React, and it's going to cure the web of the ills of the past, not make it worse: better performance and, more importantly, high-quality, well-composed codebases, plus TypeScript, which helps JS applications scale in size while keeping them sane.

[1] https://nextjs.org / http://nuxtjs.org/


This is insightful and deep, but it begs the question:

Does my text only newspaper article really need all this crap? Like, it's text, with maybe one image or two. Just give me the damn text and cut out the cruft.

Why are browsers now competing with reader modes? We're sending a bunch of data just to have the browser remove it. Just don't send it to begin with!


That's what the article is pushing for:

"Consider whether static rendering or server rendering can get you 90% of the way there. It's perfectly okay to mostly ship HTML with minimal JS to get an experience interactive."

In other words, just serve some HTML so your users can read your content.

On the other hand, your newspaper writers still need to get paid and unless you're a subscriber, that means ads and cruft on your page for ads.


And of course the article ignores its own advice and is a 1.4 MB page containing 10x more JS than HTML.


We are hosted on infrastructure that doesn't let us change this (although we are currently migrating a lot of the content off to another site that we do control). Like many developers, we can only do what we can with the tools we have, but we are changing it.


Ads can be done right. It’s rare, but it can be done.


This is an excellent point and something I'd like to see discussed more broadly. Gzipping and minifying assets pale in comparison to simply not sending the bulk of the content to begin with. It would be great if browsers could set a header value when requesting a page, and if supported, the server could respond with content that's already in what would otherwise be the so-called reader format.
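
A sketch of that idea (entirely hypothetical: no such header exists, and the helper names are made up):

    // Hypothetical Express handler: serve a stripped-down page when
    // the browser signals it wants reader-formatted content.
    app.get('/articles/:id', (req, res) => {
        const article = loadArticle(req.params.id); // assumed helper
        if (req.get('X-Prefer-Reader') === '1') {
            res.send(renderReaderHtml(article));    // text and images only
        } else {
            res.send(renderFullHtml(article));      // the regular page
        }
    });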


Because the ultimate goal of the site is not to serve you news. If it was, it'd just be an FTP server or a CDN. The site exists to track you and sell advertising space, end of story.


The thing that bothers me is we should be able to do lightweight SPAs with server-side rendering.

There is no reason why the following MUST be done client-side:

    var label = new Label();
    var result = await server.getResult();
    label.setText(result);
It was this that led me to invent something similar for one of my own products.

Initially, I had thought server-side React would do this, but apparently it's only for first-load.
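
For reference, that first-load flow looks roughly like this (a minimal sketch, with App and the Express res assumed in scope; after hydration, all further updates happen client-side only):

    // server.js: render the first load as an HTML string.
    import ReactDOMServer from 'react-dom/server';
    const appHtml = ReactDOMServer.renderToString(<App />);
    res.send(`<div id="root">${appHtml}</div>`);

    // client.js: attach event handlers to the server-sent markup.
    import ReactDOM from 'react-dom';
    ReactDOM.hydrate(<App />, document.getElementById('root'));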


> begs the question

raises the question. Sorry, it's a pet peeve.


> Does my text only newspaper article really need all this crap? Like, it's text, with maybe one image or two. Just give me the damn text and cut out the cruft.

It is petitio principii, as the conclusion is a restatement of the premise.

Also it's a rant, not a formal logical argument. Go outside, pet a dog, drink a nice beer, stop wasting your time correcting modern vernacular.


This goes into great depth explaining what has become a tired argument. No, you don't need React for everything. Yes, there are lots of benefits to rendering static HTML if your interface doesn't have much dynamic state. And yes, React can still be a useful tool in the right context. Those who say you should use it for everything are naive. Hopefully your team lead who's making these decisions isn't naive. It's possible they are.

I really don't think that anyone who's been in web dev for more than a couple years still disagrees with the above points.


You're arguing with an imaginary straw man. Is someone credible in the React community saying everything should be done in React?


There aren't many people in the React community who are explicitly arguing that everything should be React. But there seem to be very few things that the React community believes are a poor fit for React.

Take addons.mozilla.org. It's a simple, tiny website with a list of browser addons and a button to download each one. Perfect fit for server-rendered HTML, right? Nah, front-end developers rewrote it in React. [1]

Or blogs. Blogs are just a collection of text pages with occasional images and links. Surely you should use static HTML for those, right? Well, okay, but only for the initial page load; after that, make sure you let React take over so it can "improve" your page navigation. [2]

[1] https://blog.mozilla.org/addons/2017/10/25/test-new-look-add...

[2] https://www.gatsbyjs.org/blog/2017-07-19-creating-a-blog-wit...


You put "improve" in quotes as if instant page navigation without flickering is somehow worse than the alternative. Browse around this [1] website and tell me it is not a better experience than your bog-standard HTML + vanilla JS concoction. Also, every time this is brought up, the developer/content-editor experience is entirely neglected, when in reality it is a big consideration when creating something other than your own dev blog that you update once a year.

[1] https://reactjs.org


I tried the site you linked on my phone. It is not a better experience than a standard HTML site:

1. The Back button had a noticeable half-second lag.

2. When I temporarily lost connection and a link I clicked was taking a long time to load, I wasn't given the option to stop loading it like I am when I click a normal link.

I tried disabling JavaScript like a sibling comment suggested, and the website was indeed faster and still flicker-free. (Where do front-end devs get this idea that you need tons of JavaScript to do flicker-free page navigation? Have they never used Hacker News?) But then I wasn't able to see the interactive code examples on the homepage, because those genuinely needed JS to work.

This is because disabling JavaScript is an all-or-nothing proposition. There's no "throw out bathwater but not baby" button or "allow JavaScript only for things it should actually be used for, and not clumsily reimplementing browser features while forgetting half the edge-cases" toggle-switch.


> You put "improve" in quotes as if instant page navigation without flickering is somehow worse than the alternative.

Yes, instant page navigation without flickering is better than the alternative - instant being what you get from reloading a static HTML page, alternative being React.

Seriously, fetching, parsing and displaying a tiny bit of HTML is fast. As fast or faster than the React virtual DOM shenanigans, once you account for the computational load the framework adds on top of your simple page.


Real life is not code golf. We do not build things entirely for ourselves. We build things for our employers, for our co-workers, for our family members and for our friends. We build to create value for someone, and value does not correlate one-to-one with the weight of a site in KB.

If I can make it easier for me and anyone working with my site in the future by sacrificing 50-150 KB then I will take that deal any day of the week. That is what abstractions are for. That is why we do not work in byte-code. We pay for convenience in clock-cycles and disk space.


Arguably, using React where static HTML with maybe a little bit of vanilla JS would do is code golf.

HTML and CSS are not bytecode. They're high-level abstractions, hiding a very complex renderer underneath. Sometimes you need to build another tower of abstractions when this doesn't suffice - like when you're building an application with a complex GUI in the browser and you need an adapter between the DOM and a more suitable GUI pattern. But displaying text and images to communicate a message is not one of those cases.


We do not rebuild the tower of abstractions for every site that uses them. I do not personally implement React from scratch when I use it for any of my sites. I get all of the benefits for none of the effort. You do not seem to acknowledge factors other than straight up performance, as is evident by what you are picking and choosing from my comments, so obviously it is not attractive to you.


It seems to work fine under uMatrix set to "kill everything but first-party images and CSS".


Perfect! So we can have the DX of React with no downside for you. Win-win, right?


Wouldn't it be easier to just use a static site generator for a simple site like this, though?


It is built on a static site generator based on React, it's called GatsbyJS.


Sorry then, I guess I have no idea what this discussion was about in the first place. Why would anyone care about the build tools of someone else's site if it doesn't affect the user?


They shouldn't, but they do.


There is no clear line between "what can be obviously done with vanilla HTML/CSS" and "that problem definitely needs React". So, most people opt for something like React to enable the capability, in case they need it. There's nobody purely saying that 100% of everything should be in React, but that's effectively the outcome.


Web developers amaze me.

ALL (all) of the problems they face are either self-inflicted or due to other web developers, such as those who design web advertisements or bloated JavaScript libraries. The problems with sending multiple simultaneous streams to a browser are due to web developers deciding to design a web page with hundreds of individual files to be downloaded. Rendering times are long because of the complexity that today's sites are built to attain.

I can view simple sites EXTREMELY quickly today -- that is to say, sites that designers have not "improved" to the point that the site is no longer worth visiting. A simple page shows me what I want to see, and doesn't take a designer 400 hours to come up with.

All of today's problems on the web are due to web developers.

I'll fight and die on this hill, and no amount of downvotes will change my mind about this in the slightest.


I agree with you. But the problem is that web designers' clients _demand_ all the darn garbage on their site.

I have the scars to prove it... currently dying on said hill.


Hang in there.


A fun thought experiment: how would you go about incentivizing the clients who hire web developers to create fast loading pages but still have Javascript functionality? More importantly, who is in a position to incentivize clients who care more about tracking and marketing than performance?


Google has been factoring load times heavily into their search rankings for a while now. While I don't agree with a lot of their priorities, this gives a designer who's so inclined a lot of opportunity for encouraging clients to do the right thing: just show them their site's PageSpeed results, and they'll sober up.


I get your point but it's like saying all of today's problems in the world are due to humankind.

I don't think anyone would fight you on that.


You would be surprised how many legitimately blame the devil or some other evil non-human entity.


What I don't like about the current state of server-side rendering with hydration is that the server still sends the initial state data, even though it is already contained within the initial HTML. This means the site is rendered twice on first load: first from the HTML, and a second time after the initial state (some JSON) arrives, since the state atom is usually null at that point. Why can't the initial state just be reconstructed from the HTML (data attributes, ...)?


There are many nuances, though. Some data/state may not actually be rendered but kept for later use. It's also not really accurate to say it's rendered twice; rather, it's hydrated (before hydration this used to be a big problem: sites would actually re-render/flash when the JS loaded and initialized).


>It's also not really accurate to say it's rendered twice

Not sure if we're in the same boat here :). By rendered twice I mean:

Let's imagine we use React and some store management like redux.

1. Visit some SSR-rendered site. The browser receives HTML (which represents some initial state of the app, rendered by the server).

2. The browser fetches some JSON representing the initial state (hydration). It's the same state that the server used for rendering the HTML.

3. The state atom was null in step 1 and changed to the initial state in step 2, which basically means a full (second) re-render of the app.
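
The usual workaround for step 2 is to inline the state into the server-rendered page, so there is no second fetch and no null-state render (a sketch of the standard Redux SSR pattern, with appHtml, reducer and App assumed in scope):

    // server.js: embed the store state in the page itself.
    const state = JSON.stringify(store.getState()).replace(/</g, '\\u003c');
    res.send(
        `<div id="root">${appHtml}</div>` +
        `<script>window.__PRELOADED_STATE__ = ${state}</script>`
    );

    // client.js: rebuild the store from the inlined state, then hydrate.
    const store = createStore(reducer, window.__PRELOADED_STATE__);
    ReactDOM.hydrate(<App store={store} />, document.getElementById('root'));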


I guess you could write "unrendering" code for each component that transforms HTML into its internal state (if that's even possible) and you'd have to keep that code path up to date with any changes to your rendering code. This sounds really error-prone compared to hydration.


The article talks about Google Search, and the Google tools it points to are good at telling you how Google Search will react. But things like the Internet Archive and other search engines are not going to cope as well with full-CSR apps.

For those here who use other systems like DuckDuckGo, that makes a difference. I really started to notice this when I visited some sites (not apps) via the Internet Archive to get old versions and found things broken.


> Server rendering has had a number of developments over the last few years.

> Streaming server rendering allows you to send HTML in chunks that the browser can progressively render as it's received.
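
(In the React world this maps to APIs like renderToNodeStream, which pipes HTML chunks into the response instead of building one big string. A rough sketch, with App and the Express res assumed:)

    // React 16 streaming SSR: the browser can start parsing the
    // HTML chunks before the full page has been rendered.
    const stream = ReactDOMServer.renderToNodeStream(<App />);
    res.write('<div id="root">');
    stream.pipe(res, { end: false });
    stream.on('end', () => {
        res.end('</div>');
    });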

Hasn't PHP done this by default since, like, forever?


Though no-one (for a rounded-down definition of no-one) uses PHP without front-controllers and templating systems these days, giving up that feature.


> and templating systems

...but last time I checked, PHP itself was a templating system. What changed?


PHP is not a templating system; it's a web-centric programming language. It doesn't support any of the features one would want in a modern templating system, such as auto-escaping or tainting of user-supplied data, knowledge of the current rendering context, native support for HTML or XML entities, macros, template caching, etc.

All of these features and more have to be implemented as a third party framework for PHP to be effective as a templating system, which is why such systems exist.

Merely echoing HTML strings and wrapping variables in htmlspecialchars() does not a "templating system" make. If it did, then every programming language would also be a templating system as long as it could output a string of characters to the necessary port.


I was thinking about being able to write regular HTML with code in <?php ... ?> blocks and variable expansions like <?=$foo?>, with uses like a for loop over stuff (e.g., table records) whose body includes another PHP file containing the HTML to write, with <??> tags referencing the object being looped over.

It has been a very, very long time since I used PHP for anything serious (or did any web programming at all), but to me that sounds like templates. Perhaps you are referring to something else with the word "template"?


Modern PHP isn't written that way, is the point. For better or worse.


Yes, but my original question was that it can be written that way, and since it can be written that way, why do you need to implement a template system when the language you are implementing that template system in can also function as a template system itself? Isn't it a waste of time and resources?


>why do you need to implement a template system when the language you are implementing that template system can also function as a template system itself?

Because PHP, natively, doesn't provide the features that a modern template system needs to be safe, scalable or productive. PHP doesn't recognize HTML or XML as a type, or a "template" as a thing; it doesn't automatically escape user-supplied data (correctly, based on context), deduce content-type headers, route requests, or do any number of things one expects of a modern framework or templating system. As a PHP application grows in complexity, you will inevitably find yourself reinventing frameworks. So you might as well just use one that's already been proven in the wild and optimized.

All PHP has is string concatenation. Its "templating system" is blind string concatenation - that's it.
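
To make "auto-escaping" concrete, here is a toy tagged-template helper in JS where interpolated values are escaped by default, instead of relying on the developer to remember an escape call (purely illustrative; userInput is assumed):

    // Escape special characters so interpolated values render as text.
    function escapeHtml(value) {
        return String(value)
            .replace(/&/g, '&amp;')
            .replace(/</g, '&lt;')
            .replace(/>/g, '&gt;')
            .replace(/"/g, '&quot;');
    }

    // Tagged template: every ${...} value is escaped automatically.
    function html(strings, ...values) {
        return strings.reduce(
            (out, str, i) => out + escapeHtml(values[i - 1]) + str
        );
    }

    html`<p>Hello, ${userInput}</p>`; // a '<b>' in userInput arrives as text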


This area desperately does need terminology to be nailed down... but... I don't really like the proposed terminology. It's kinda hard to understand and follow. (The meat is in the table at the end of the article.)


If you actively use w3m/lynx, it's actually surprising how many websites can still be visited with a non-JS-enabled browser. But I guess I have my bar set way too low, as I expected pretty much every website I visit to behave like an npm black hole.


"Trisomorphic Rendering" - looks like someone at Google is working on their promotion doc.


I believe that term had already been used before (I'd at least seen it on twitter)


Did anyone else notice they used the term "uncanny valley" in the article for something totally unrelated to what that means? Or maybe they did mean it, but the paragraph didn't allude to that very well. Or maybe they heard the phrase and interpreted "uncanny valley" as whatever they thought it meant...


It's not unusual to see it in the context of UI as something that looks interactive but is not / looks done but feels wrong. Like the prank of replacing someone's desktop with a screenshot of itself :)


> Like the prank of replacing someone's desktop with a screenshot of itself :)

That's a fake, not the uncanny valley


Do Google developers follow Google's advice?


Not always. We do work with a lot of teams internally though.


The Android team does not follow their own advice, so...


>android apps

Nah


Yes?


React and similar JS frameworks were created by walled-garden companies that do not want most of their content indexed by Google. Use them on applications and content you don't want indexed by search engines.


I'm working on a site right now that doesn't do any SSR—it's just an index.html with a bundle.js on it. Google indexes it fine and, surprisingly, we have great SERPs: 2nd result for "trillion trees".

It is perhaps possible that Google indexed it from a previous version that did have HTML, or that they are using a sitemap.

(I've just joined the project, and I intend to get them set up with proper SSR soon.)


I'm not sure I entirely agree - Facebook is mostly login-walled anyway, and they developed React to suit the needs of a data-intensive SPA. Why not just use a robots.txt to stop crawlers from indexing?


Ironically, React server-side rendering works much better than the alternatives like Angular.

It's that easy: just a one-liner to use it in ExpressJS with a server-side template system.


In a similar fashion to React itself, it's not that easy once routing, caching, context data, request or query parameters, data fetching and state management, environment variables, sessions, etc. all come into play.


React has a server-side rendering API. So, I find it difficult to rationalize an argument where FB purposefully launched React to get people to return virtually empty HTML documents from the server (at most, a script tag which "boots up" the page content) in order to make the task of indexing difficult.

Facebook itself indexes many sites for its link-preview functionality. The preview image, page title, and description are all derived from Open Graph tags.


Did you read the article? There's a whole section on SEO that talks about how it works great with SSR.


Really confused by the terminology. I assume in this context "rendering" does not mean translating an abstract scene description into pixels on screen, but something else.

Can somebody shed some light on this?


"Rendering" in this context is turning an abstract representation of the content of your page (or state of your application) into HTML (either as bytes delivered in an HTTP response from a server, or as DOM nodes created by JavaScript on the client).

For Hacker News, "Server Rendering" is taking the top 25 stories, creating a large string containing all the HTML tags, then serializing that to a byte stream.

A hypothetical "Client Rendering" alternative to HN might be downloading a JSON chunk containing the data about the top stories, then using a JavaScript framework to create DOM nodes in the user's browser that create the same document client-side.
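
Sketched in code (with made-up helper and endpoint names), the two approaches to the same front page:

    // Server rendering: build the whole page as one HTML string
    // (escaping omitted for brevity).
    function renderTopStories(stories) {
        const items = stories
            .map(s => `<li><a href="${s.url}">${s.title}</a></li>`)
            .join('');
        return `<html><body><ol>${items}</ol></body></html>`;
    }

    // Client rendering: fetch JSON, then build the same DOM in the browser.
    async function renderTopStoriesClient() {
        const res = await fetch('/api/topstories'); // made-up endpoint
        const stories = await res.json();
        const ol = document.createElement('ol');
        for (const s of stories) {
            const li = document.createElement('li');
            const a = document.createElement('a');
            a.href = s.url;
            a.textContent = s.title;
            li.appendChild(a);
            ol.appendChild(li);
        }
        document.body.appendChild(ol);
    }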


Ah got it, thanks for the concise explanation.


In a web context, rendering is populating an HTML template with data.


Constructing and painting the DOM.


No; "Server Rendering" is creating an HTML document as text, then serializing it to bytes and delivering it in an HTTP response. There's no painting in this scenario.


Read without any JS: https://beta.trimread.com/articles/116

Reduces from 1.2 MB to around 500 KB.


This is great; it really proves that you don't need a lot of styling and interactivity to get a message across.

I'm sounding like my grandpa, but... back in my day, things were fast and simple, and they worked. Now we've got this huge mess of technology just so we can track users, sell ads and annoy our site visitors?


Is it possible to achieve good SEO without using SSR?


Yes, just test how it renders, as explained in the article. Well, for Google at least.


Is there another name for "Trisomorphic Rendering"? There are not many hits. It seems like Nolan Lawson coined the term in 2016[1] (see quoted tweet, since deleted).

[1]: https://jeffy.info/2017/01/24/offline-first-for-your-templat...


I tried Google's Mobile-Friendly Test on a web page running Google AdSense and received a "Page partially loaded" warning because the https://googleads.g.doubleclick.net/pagead/ads "page resources couldn't be loaded". The detail as to why: "Googlebot blocked by robots.txt", which links to their robots.txt here: https://googleads.g.doubleclick.net/robots.txt

The page also received a "Redirection error" on https://stats.g.doubleclick.net/r/collect and https://www.google-analytics.com/r/collect

301 and 302 redirects are also returning a redirection error.


This is very disappointing to read. I encourage anyone to compare the architecture and DX of client-side rendered apps based on Create React App to Next.js. Next.js is very immature and clunky in comparison.


What part are you disappointed with? They just mention Next.js as an example.

What does "DX" stand for here?


DX means Developer Experience.


Who writes this stuff?


I don't know why I am getting downvoted, I literally am curious who writes these google-hosted documents!

(I also do find this one to be not very well copy-edited/written).


I didn't downvote you, but I guess it's because:

a) From the way your comment is written, it's pretty clear that you actually just want to say what's between the brackets in your second comment. It comes across as passive-aggressive and doesn't contribute anything.

b) The page mentions who wrote this article. The names, positions at Google and pictures are all there.



