The vast majority of modern websites could easily deliver their existing functionality (if anything, with better interactivity) with server-side rendering and just enough plain-vanilla JavaScript to handle Ajax calls.
If you don't believe this, I suggest you carefully reexamine your assumptions. A number of them are almost certainly false.
That costs an HTTP round trip, which can easily pay for the cost of tens of kB of JavaScript code on each hit.
Since a properly-optimized web app will have all or most of its JS code set to be cached indefinitely, the cost of the JS code is near zero; basically, just the cache lookup time. Contrast the HTTP round-trip, which you pay on each such interaction.
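For concreteness, here's a minimal sketch of what "cached indefinitely" can look like, assuming an Express-style server and content-hashed bundle filenames (both of which are my assumptions, not a prescription):

```js
// Sketch of "cache indefinitely": serve content-hashed bundles
// (e.g. app.3f9a1c.js) so the browser downloads the JS once and
// serves it from cache on every later visit.
const express = require('express');
const app = express();

app.use('/static', express.static('dist', {
  maxAge: '365d',   // a year is effectively "indefinitely"
  immutable: true,  // the hashed filename changes iff the content does
}));

app.listen(3000);
```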
If you can render something on the client side without reaching back to the server, you should.
> plain-vanilla javascript
XHR and the browser DOM are horrid, inefficient APIs in terms of lines of code per unit of functionality delivered to the end user.
I recently wrote some jQuery code to prototype a user-requested feature which ballooned to about 3x the size when the person managing the project demanded that it be done without any external dependencies. (It was about 2.7x the lines of code and 3.5x the bytes of code, the difference being that the XHR + DOM lines tended to be longer.)
In the same project, a separate rewrite from plain-old-JS to jQuery cut the code size nearly in half.
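To give a flavor of the difference (with a made-up endpoint and element ID, not the project's actual code), compare a simple "fetch a fragment and insert it" in jQuery against the plain XHR + DOM equivalent:

```js
// jQuery version
$.get('/fragment', function (html) {
  $('#target').html(html);
});

// Plain XHR + DOM version of the same thing
var xhr = new XMLHttpRequest();
xhr.open('GET', '/fragment');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    document.getElementById('target').innerHTML = xhr.responseText;
  }
};
xhr.send();
```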
Since this plain-old-JS code was inline on the page, the user now pays for it on each page load, whereas an aggressively cached static library pull is paid for only once, as long as the user doesn't toss the browser cache and revisits the app often enough to keep the JS cached.
Developer efficiency and user efficiency don’t have to be at odds. There’s a wide gray area between the article’s cherry-picked examples and a web app engineered for efficiency.
You're considering network transmission time, by which measure cached JS is indeed free, but ignoring parse/JIT time and CPU usage. Processors aren't getting any faster, and all that client-side rendering code has a very real cost in compute-bound work on the user's hardware.
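A rough way to see that cost (my own sketch, not a rigorous benchmark): even when the script below comes straight from the cache, the elapsed time still includes parsing, compiling, and running it.

```js
// Rough sketch: time from injection to "loaded and executed" for a
// cached script. The network fetch is ~free when cached; the rest is
// parse/compile/execute work the CPU repeats on every page load.
const t0 = performance.now();
const s = document.createElement('script');
s.src = '/static/app.3f9a1c.js'; // hypothetical cached bundle
s.onload = () => console.log(`ready in ${performance.now() - t0} ms`);
document.head.appendChild(s);
```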
The usual answer I've heard here is to use an SPA, but SPAs have problems of their own. They chew up huge amounts of memory if you keep several tabs open, their UX is worse than if you'd just built normal webpages, you have to deal with making them indexable, and so on.
Honestly, I'd be ecstatic if most pages loaded in two or three network round-trips. A cold request takes maybe 100ms when I'm on a cell connection, but JS-heavy pages I've seen tend to have load times measured in seconds.
At the time, your 3 round trips would have cost about a full second in the tests I'm thinking of, which is roughly the same as the worst-case JIT time those tests measured for jQuery. That worst case was on a slow Android 2 device.
Cellular network speeds have gone up, but certainly not as much as mobile processor speeds, so if you re-did those tests today with modern networks and devices, it’s probably a net benefit to pull the jQuery once, then JIT it on each page load, as compared to making even a single extra round-trip per page load.
> Processors aren’t getting any faster.
That’s true on the server side as well.
The article talks about externalized costs, but if you just shift the computing burden to the server, how do you pay for that?
You could load the page with more ads, which eats up the network bandwidth and JIT savings you just bought by moving the processing to the server.
Or, you could charge the users more than you currently do, which is economically little different than shifting the computing burden to the users, implicitly requiring them to buy faster mobile devices and better mobile data plans.
Consider also that the number of bugs created per line of code is roughly constant for each pairing of developer and programming language, so who pays for the costs of the extra bugs you’d expect to find in 2-3x more LOC?
TANSTAAFL.
> JS-heavy pages I've seen tend to have load times measured in seconds.
I doubt you’re comparing apples to apples.
There certainly are many very fat JS-heavy pages on the Internet, but what’s your comparison? If the JS-light alternatives aren’t accomplishing the same ends, then it’s not a fair comparison. You can’t compare, for instance, a web IRC gateway app to Facebook, even though you can use both to transmit plain text to another person.
Not that I’m defending Facebook. I’m just pointing out that they’ve got a wholly different thing going on there than the IRC gateway.
Don't even need Ajax anymore. You can render on the fly, and deliver new HTML over websockets. The gap between SPA and classic server-side rendering is getting smaller fast.
The idea in this case is that you'd have a bit of JS that takes care of updating the DOM, binding event handlers (whose behavior is specified and handled server-side), and so on.
How Phoenix.LiveView would do this exactly is still being worked on, but [Drab](https://github.com/grych/drab) will probably offer some inspiration.
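A minimal sketch of the client half of that idea, assuming the server pushes JSON messages shaped like { selector, html } over a websocket. The message format and endpoint here are made up, and LiveView's actual protocol will differ:

```js
// Hypothetical HTML-over-websocket client: the server renders and
// decides what changed; the client just patches the named node.
const socket = new WebSocket('wss://example.com/live');

socket.addEventListener('message', (event) => {
  const { selector, html } = JSON.parse(event.data);
  const node = document.querySelector(selector);
  if (node) node.innerHTML = html;
});

// User events go back up the wire instead of running local handlers.
document.addEventListener('click', (e) => {
  const action = e.target.dataset.action; // e.g. data-action="increment"
  if (action) socket.send(JSON.stringify({ event: 'click', action }));
});
```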
What are you going to do with your AJAX result without client-side coding? I don't buy this argument at all. You could ditch AJAX and do everything server-side like the old days (and it might even perform better, given how heavy modern sites seem to be!)
AJAX requires client-side coding. I didn't say you didn't need client-side coding; I said you didn't need very much. AJAX (or the equivalent via websockets) is still necessary for partial page rendering and decent interactivity, which I think are still desirable features for the modern web. But of course for strictly static sites like most blogs you don't even need that.
Shove them back into the DOM with a little bit of JS? AJAX allows for some legitimate improvements if you need to work around the document model, but these days it's abused to extreme levels.
It's not that trivial. Yes, eventually you stick it back into the DOM somehow, but you might not be getting back full HTML; you might need to modify it by adding some CSS classes or event listeners. Or you might only be getting back data, from which you generate the DOM yourself.
Basically you need a full programming language, the DOM, events, and so on, like we have now. But I fully agree that it's abused currently.
Full disclosure: I contribute to the madness by developing ReactJS applications.
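For that last case, a minimal vanilla sketch of "get back data, generate the DOM from it" (the endpoint and markup are hypothetical):

```js
// Fetch JSON and generate DOM from it, adding classes and listeners
// along the way (the /api/items endpoint and #items list are made up).
fetch('/api/items')
  .then((res) => res.json())
  .then((items) => {
    const list = document.querySelector('#items');
    for (const item of items) {
      const li = document.createElement('li');
      li.textContent = item.name;
      li.classList.add('item');
      li.addEventListener('click', () => console.log('picked', item.id));
      list.appendChild(li);
    }
  });
```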
I suspect Phoenix.LiveView will end up covering lots of the common use cases with a relatively small client-side API, since much of the specifics (including the code that gets triggered by an event) would be handled on the server. For the bits that really do need it, one would fall back on the 'regular' approach of writing client-side JS.
In my experience at least, a ton of the React work I do is quite similar within and across projects, and for many of the 'special bits' I'd be happy to resort to a more standard approach if it let me avoid React altogether (not that I hate React; I just love how much simpler everything would be if I could avoid it).
I don't think anybody would argue that server-side rendering couldn't deliver any and all experiences. Clearly even gaming is possible with a thin client (https://parsecgaming.com/).
What you lose at that point is latency. A round-trip to even the closest CDN is going to be vastly slower than rendering the next frame on a GPU with direct access to write to your screen.
It's not that things are impossible server-side. It's that clearly server-side rendering is not the universal answer for more-performant applications.
Some things will always need heavier processing client-side. If your site depends on high frame rates and a GPU, it's not in the "vast majority" category I was talking about, and my argument doesn't apply. My complaint is that far too many web designers building a site for a newspaper or a blog or a retail storefront seem to think they need a framework appropriate for an FPS game.
What I believe it's saying, and accurately so IMO, is that there are a lot of so-called "best practices" that are dogmatically followed without anyone stopping to think critically about whether they actually make sense in the context in which they're being applied.
My takeaway from "If you don't believe this, I suggest you carefully reexamine your assumptions." is "If you believe something is true, stop and think critically about _why_ you think it's true. Be open to the fact that this process may lead you to discover it isn't true after all".