I may get hate for this, but watching JavaScript development mature makes me feel like I'm taking crazy pills. So much of this was figured out decades ago, but it's like everyone is learning it all for the first time. I don't know if it's because the standard library is so lacking, or because it's historically been a client-side-only language, but it makes me feel completely disillusioned with programming. It's like we're not progressing at all, merely relearning the same shit over and over again, each generation, in a series of cycles.
Or maybe I'm just too old and need to get off my own lawn.
I created and maintain Vue.js's other server-side rendering library, express-vue.
I’ve spoken on it at a few talks, and every time I do, the eyes of the audience just go blank when I talk about SSR. It’s like I’m talking in some strange language they don’t seem to understand, where the only thing they get is client side everything.
I get the stupidest questions, like "so how do I add in <insert unnecessary bloat for a SPA here>?" Most of these are about state management on the client, a.k.a. Redux, Vuex, etc. To which I reply: "You don't need that. State lives on the server. It's simple."
Between maintaining this library as it's grown as big as it has, and having to defend it and defend SSR, I've fully burnt out on JavaScript and frontend altogether.
We use my library in production at work, and have started talking about migrating to Nuxt. I'm planning on changing teams before that happens.
JavaScript is just hype-driven development, of the worst kind.
Thanks for all the work you've done on express-vue. Even though I'm not the biggest fan of vue and only really use it at work, it makes me happy to know there's good libraries out there for it that do one thing well (a trait that's sorely lacking in a lot of JS libs).
the js world is full of people coming from jQuery plugin development. they've been thrown into unknown waters by React, Angular & co. because "javascript". but they don't know shit about everything that happened before their time.
it seems like they're very good at marketing though. i came across folks in management naming React as a solution to everything. even to design problems.
Agreed. Useful data in the first payload is what matters for a responsive experience. The makeup of the engineering team drives whether that's appropriately accomplished with view templates, rendered view frameworks, or some other HTML-rendering technology that embeds preloaded data.
Here's a pattern: since a Service Worker won't run the first time a user visits a page, you send server-side rendered content that also includes a line that installs a Service Worker after the page is fully loaded from base HTML with useful data.
The second time the user visits your page, the Service Worker will ideally be installed and running; it will intercept the calls, download your components separately as they're needed, and save them in the cache, as opposed to letting the server render the components to base HTML.
From the third visit onwards, components are loaded directly from cache and rendered via the Service Worker.
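A minimal sketch of that pattern, runnable as plain Node code. The `renderPageWithSw` helper and the `/sw.js` path are illustrative names, not from any particular framework; the point is only that the server ships useful HTML first and defers Service Worker registration until after load.

```javascript
// Hypothetical helper: wrap server-rendered HTML with an inline
// snippet that registers a Service Worker only after the page has
// finished loading, so it never delays first paint. The SW only
// takes effect on the *next* visit.
function renderPageWithSw(bodyHtml) {
  const swInstaller = `
    <script>
      if ('serviceWorker' in navigator) {
        window.addEventListener('load', function () {
          navigator.serviceWorker.register('/sw.js');
        });
      }
    </script>`;
  return `<!doctype html><html><body>${bodyHtml}${swInstaller}</body></html>`;
}

const page = renderPageWithSw('<h1>Server-rendered content</h1>');
```

The first response is complete, useful HTML; the Service Worker only changes behavior from the second visit onward.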
A single-page application (SPA) without server-side rendering (SSR) doesn't serve "useful data to the user on the first payload". It serves an empty HTML shell with a script tag (or multiple tags) linking to the JavaScript app bundle (or multiple bundles). If the app bundling doesn't break the app into smaller, asynchronously loaded bundles, the initial JS payload will probably be 10+ MB. At which point the browser needs to download the 10 MB of JS, execute it, and have the JS fill in the HTML with "useful data".
So no, an SPA without SSR doesn't include useful data in the first payload.
There are patterns (prerendering, streaming (https://jakearchibald.com/2016/streams-ftw/#streaming-result...), etc.) other than simply breaking up the JS app bundle into smaller pieces that can alleviate the traditional "SPA concerns" — hopefully you can just reduce its size overall.
A Phoenix + React application I'm working on uses react-stdio to render React components server side and speed up page load time when users get to the site for the first time or reload the page in the browser for whatever reason.
We ended up with a few hundred MB of RAM for Elixir and a few GB for the Node.js servers. Basically our machines are JavaScript application servers with a little bit of Elixir running in a corner. It feels so odd.
10 MB could be an exaggeration on average, but not by a huge amount. Not even trying to find something crazy big, this is the first site I checked because I went to it earlier today and noticed it took a long time to load:
For one example of JS dumbfuckery, the Patreon embed button is served as a React app, with all of React bundled in dev mode, weighing in at just over 2 MB.
All of that for a red rectangle with the word "Patreon" in the middle.
Adding a (huge) abstraction on top of another (huge) abstraction to solve a problem introduced by the first abstraction is rubbish software engineering. It's one of the oldest anti-patterns in the book.
I get that these libraries provide some utility beyond their core purpose, and that it's a bit more nuanced than A -> B -> A, but I'm highly suspicious of people that recommend these libraries without giving much context behind it.
The industry norm for SPAs seems to be moving towards SSR along with code-splitting and offline workers. The main SSR framework for React is Next.js; Vue and others have their equivalents.
Honestly I don't see a reason not to use it. I could write a traditional website using Next/react at the same speed as someone using Django/rails/etc.
Next.js is a front-end framework, right? So how can you build a traditional website using only a front-end framework at the same speed as someone using Django or Rails?
I previously visited nextjs.org and read the docs, and it wasn't clear that Next.js is a full-stack framework. The term "full stack" isn't used anywhere at nextjs.org[1], unlike rubyonrails.org[2].
Furthermore, there are numerous references on the web to Next.js being a front-end framework, including statements made by Next.js maintainers[3]. Do these statements not reflect the current state of Next.js?
If Next.js is a full-stack framework, then this seems to be another instance of JavaScript tools having inadequate documentation. Speaking of which, if Next.js is a full-stack framework that's comparable to Rails and Django, why is the documentation just one page (!) and there's no mention of database migrations or other basic features that are included with every full-stack framework?
Next.js is not really a full-stack framework. It provides a server for static websites, but as soon as you want dynamic content you can use any JS API framework of your choice.
Apologies for the tone. The RoR website does not describe itself as 'full stack', nor is it a full-stack framework in the modern sense, since there is no client-side library other than the AJAX helpers.
Next.js is not a 'client side framework' in the usual sense, as it encompasses server side rendering, but it doesn't mean you'll be writing your backend logic in it (unless you want to) or that it will include an ORM.
Texas (https://gitlab.com/dgmcguire/texas) is a library that brings development back to the server. You can integration-test without headless browsers, get SPA-like UX without writing a line of JS, and basically just write SSR apps, adding only a few lines of code to make your apps realtime and SPA-like.
It works by leveraging persistent connections (WebSockets): the first page load is entirely SSR, and subsequent updates are SSR too, but only by V-DOM diffing HTML fragments, sending only the minimum amount of patch data over the wire.
Your apps will be faster, and your development process will be faster, with less code and without all kinds of complex tooling.
Here's a todo app that does all rendering via SSR: https://warm-citadel-23442.herokuapp.com/ - you can see just how fast it is, even on a free tier of Heroku. Open it up in multiple clients and you'll see that all operations are broadcast in realtime too, and believe it or not there are only about 3 lines of code you write to make that happen (plus all the code necessary to get a normal SSR app running).
> but only by V-DOM diffing html fragments, sending only the minimum amount of patch data over the wire.
The V-DOM is used in the browser to update your page; the algorithm updates only the changed parts instead of the traditional way of updating the whole page. So I'm not sure how 'minimum data transfer over the wire' is involved here. Can you please clarify? Not sure if you meant getting JSON data from REST APIs.
VDOM is just a concept meaning a representation of the DOM in code. Texas keeps an Elixir data-structure representation on the server as a cache of client state and diffs against that. It then pushes a patch down to the clients that is used to update only the parts of the page that have changed.
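The server-side diffing idea can be sketched in miniature. This is a toy, not Texas's actual implementation (Texas is Elixir; this is plain JS, with an invented node shape and patch format): keep the last-rendered tree on the server, diff a new render against it, and ship only the changed pieces.

```javascript
// Toy server-side VDOM diff: nodes are { tag, text, children }.
// Real diffs handle inserts/removes/moves; this sketch assumes
// same-shaped trees and only patches changed text content.
function diff(oldNode, newNode, path = [], patches = []) {
  if (oldNode.text !== newNode.text) {
    patches.push({ path, text: newNode.text });
  }
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  for (let i = 0; i < Math.max(oldKids.length, newKids.length); i++) {
    if (oldKids[i] && newKids[i]) {
      diff(oldKids[i], newKids[i], path.concat(i), patches);
    }
  }
  return patches;
}

const before = { tag: 'ul', children: [
  { tag: 'li', text: 'Buy milk' },
  { tag: 'li', text: 'Walk dog' },
]};
const after = { tag: 'ul', children: [
  { tag: 'li', text: 'Buy milk' },
  { tag: 'li', text: 'Walk dog (done)' },
]};

// Only the one changed <li> crosses the wire, not the whole page.
const patch = diff(before, after);
```

The patch here is a single tiny object naming the changed node's position and its new text, which is the "minimum data over the wire" being claimed.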
Not true at all. For one, 99% of the time people on a realtime-updated page will be viewing the same content, meaning you can keep a local-only cache that all clients connected to that server share. That makes the memory footprint negligible; we're talking maybe a few MBs on a typical webapp.
Secondly, as far as scaling goes, you only need to broadcast extremely small messages (on the order of bytes, not kilobytes) across servers to tell clients to update themselves using their local caches.
It's extremely scalable.
Further, Texas only builds on top of HTTP semantics, meaning it can fall back to a stateless protocol at any point, and the user won't even notice other than their app working a bit more slowly (but still working) for a few seconds while the websocket reconnects to a new node.
Hi, I just watched through the first two parts of your three part intro to Texas on YouTube.
I noted that you restart the server process each time you make a change to the Elixir application. Coming from the Clojure universe, I have the following question: is it possible, instead of restarting the server program each time you make a change to its source code, to modify the program as it is running?
This is usually how I work on my Clojure programs, and I find it gives me near-instant feedback while I am programming.
Yeah, Phoenix already has live-reloading included; I just probably hadn't set up the watchers properly, or I was being dumb xD. It's pretty trivial to set everything up to give instant feedback, though.
Very much different. Turbolinks just uses AJAX to get a full rendering and replaces the body of the DOM with the response; Texas can do realtime updates by pushing only patches (far less data over the wire) over a websocket.
Given that WebSocket is now supported in most browsers, I have been wondering why this technique hasn't been more widely used.
I wish HN allowed me to search through all the posts I have viewed or favourited; I am pretty sure I asked a similar question or saw something similar with Rails.
Very cool. Yeah, I think there are a few novel concepts that Texas brings to the table, but I'm super excited to see more projects trying to leverage persistent connections to make SSR viable again. I believe SSR is not only viable but optimal, given the right techniques. Hopefully Texas is pursuing the most optimal abstractions to make this work, and I think it is, but I'm excited to see more people thinking about this. Time will tell, I guess!
On the demo? No, it's not using any compression; I just got something working quickly one day to show off the library. The inner workings of the patch messages are undergoing a lot of reworking, and I have plans to make different levels of compression available via configuration for the messaging protocol for the patches being pushed over websockets. This is a pretty young library, though.
The first page load is just a typical SSR webpage, though, so compression could be applied just as easily as for any other SSR webpage.
Likely you're just located somewhere far from the server in Virginia, USA. If you watch the conference talk just a few minutes in I show a gif of the latency from Seattle to a server in California. Most interactions you can't tell at all.
Well, considering LiveView doesn't exist yet, it's not a direct competitor just yet. But yes, it'll be solving similar problems as LiveView, although last I talked to Chris he was taking a much different approach in the implementation details.
I can't say I fully comprehend the SSR vs SPA war.
Both concepts are definitely being misused and overused (one is "dated" and the other "modern"), but surely the middle ground, where the best of both can be attained, is preferred.
I don't know of any terms that mediate the opposition other than perhaps "hydration"?
This is the key. There are super developed SSR frameworks and super developed SPA frameworks, but getting the two to work together is like pulling teeth. No one tries to write an SSRSPA framework because that would be considered too "opinionated", but maybe that's exactly what we need.
If you're using node.js and redux, then https://github.com/faceyspacey/redux-first-router pretty easily gets you there. Other than turning route changes into redux actions, I don't find it particularly opinionated.
Most of the current SPA frameworks can re-hydrate SSR'd JS without issue, so instead of configuring SSR, just use a headless browser, which is what Roast does.
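The rehydration handshake mentioned here can be sketched in miniature without any framework (all names are illustrative): the server renders markup from state and embeds that state as JSON, and on load the client re-runs the same render function against the embedded state, taking over the page without replacing the server's DOM.

```javascript
// Shared render function (in real apps, the same component code
// bundled for both server and client).
function render(state) {
  return `<span id="count">Clicks: ${state.count}</span>`;
}

// Server side: markup plus the serialized state the client will need.
function ssrPayload(state) {
  return render(state) +
    `<script id="state" type="application/json">${JSON.stringify(state)}</script>`;
}

// Client side: recover the embedded state and check that a re-render
// matches what the server sent (a "hydration mismatch" check).
const payload = ssrPayload({ count: 3 });
const embedded = JSON.parse(
  payload.match(/type="application\/json">(.*?)<\/script>/)[1]
);
const clientMarkup = render(embedded);
```

If `clientMarkup` matches the server's markup, the client can attach event handlers to the existing DOM instead of throwing it away; a mismatch is the classic hydration bug.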
They aren't mutually exclusive, but they are either (a) at odds with each other, because the frontend is in JavaScript and the backend is in Rails or Python or something else, or (b) completely in sync because the backend and frontend are both in JavaScript, but terrible for the same reason. With WebAssembly on the horizon we will soon have Rust/Crystal/Go-driven unified backends and frontends, and then we can finally exit this dark age of JS being the only frontend language.
I have not had the chance to play with it yet, but it's very high on my list: Drab, an extension library for the Phoenix framework "providing an access to the browser's User Interface (DOM objects) from the server side".
A friend did an experimental library way back using server-sent events for the Yii framework. I don't think I fully appreciated the idea back then. Admittedly, Elixir seems a better fit for this pattern than PHP though.
As someone who prefers Python, it is unfortunate that the only way to get SSR is to commit to JS, but with WebAssembly I think we will eventually have tools such as React and Redux translated to other languages.
As far as performance goes, JS isn't too bad. It's async by default, and a lot of effort has gone into making it run fast.
Yeah, I actually have few problems with JS itself. It's npm and this tendency to have a million three-line dependencies. In the Ruby gems ecosystem, for example, there just isn't that problem.
I've been using Next.js, and SSR with React is dead simple. I use React even when building standard web pages, because the React way of doing things makes more sense to me as a programmer than the HTML/CSS/JS way of doing things.
I like building composable building blocks with self contained state. The old model of separate html, css, and js never worked well for me.
My main complaint with React SSR is that it almost forces on me the requirement of using Node for the backend. I'd prefer to use a language like Python.
Is there any way to use Python as the backend? I feel the same way about handling raw HTML/CSS/JS. Either I haven't learned enough about handling those or I should learn React/Angular/Vue, but I do want to stay on Python for my backend.
I'm a fan of Gatsby and think it's a sensible approach for many sites. But it has limitations, the most obvious being that it essentially reverts to an SPA if you need the user to be logged in before you can render any real data.
We, your users, do deal with it. Constantly. Usually, we deal with it by waiting 30 seconds for your single-page application blog article to load because we only get intermittent 3G reception on the metro.
I don't really care what sort of frameworks you like. Stop building shit webapps!
Problem with your mac? I'm getting 2.23 seconds to open new message window from your link on a maxed out Lenovo X270 running Ubuntu 18.04 via Chromium. Mind you, I'm also on Gigabit internet. So yeah, Gmail is fucking slow.
Trying it out with Chromium, it loaded the direct link after 3.88 seconds. Trying it out with Safari, it loaded the direct link after 6.27 seconds.
Firefox is pretty fast most of the time. Google's web apps are the only things that perform so poorly on it. I don't understand how they can have such a drastic performance disparity between browsers. Maybe they're just not testing enough on Firefox.
Also if you haven't taken a look in the last 6 months, I would. Things have REALLY started to solidify and come together. They are also 2/3rds of the way through Windows support, which is the main barrier right now to 1.0. Nothing else significant is going to change before 1.0 AFAIK, and it's my job to know.
Still better than the Node.js ecosystem, IMO. We are using Amber in production, and it's been a pretty good experience. We control the Dockerfile, so we aren't pressured to upgrade things the moment a breaking change comes out, but we stay on top of it.
I really don't get this whole SSR thing. You take a framework designed explicitly for dynamic client side code and use it to make a glorified old school framework (insert your choice of RoR, Django, Laravel, etc).
Why does anyone think this is a good idea? You end up re-implementing Rails on a foundation of quicksand and manage to throw the excellent standard libraries that come with those languages out in one stroke.
Is this a result of too much kool-aid? Or have we ended up with hippie bootcampers without a shred of knowledge in high level positions? It's depressing to watch really.
I went to a conference recently where one of the talks was by a guy from Comcast on 'Progressive Web App performance'.
The two takeaways were that they had decided against ever using SSR (he didn't say why, just acted like it was ridiculous), and that through various clever performance tricks they'd managed to get their page load time down from 30 seconds... to 15.
I can't help feeling that edge rendering is greatness personified ... but that it's perhaps not any more useful than just server side rendering. An edge renderer is still going to have to either access a model/controller held centrally (getting our latency times up again) or use some godforsaken distributed/replicated state that's a bitch to debug. Perhaps an edge renderer can hold open a connection to the central logic and knock a hundred millis or so off the set up time as a result, but none of the proposed models for doing this would seem to support that. They don't seem to support stateful anything.
Having run a platform on RoR that did billions of requests per day, it scales just fine. It’s not as cost effective as Go, Java, Rust, etc., but it’s perfectly scaleable.
Scaling web apps horizontally invariably becomes more about where your data lives, how it gets scaled out, how it’s accessed, how work get processed (jobs), what caching you do, what memory requirements you have.... yada yada. Rails wasn’t ideal, but rarely are you ever working in a pristine “ideal” tech stack once you’ve hit scale and you’re 5 years into a business.
I think both statements depend on the definition of "at scale". I am no expert, but every RoR thing I have ever seen has been tiny in terms of traffic and still performed terribly. Is there anything in the top 100 sites using RoR?
I do think though there is a certain size companies like this reach where they go from innovative to being afraid to change anything lest the viral popularity goes away.
There's quite a distance between "tiny" and "top 100 sites"! Some pretty darn popular sites that use Rails (afaik) are GitHub, Shopify, Hulu, Twitch, and Airbnb.
Not really. The web has become massively centralized, with almost all traffic going to a small number of sites. The bottom of the top-100 list is only getting about 50 million pageviews a month; even poorly written sites in slow languages can easily handle that on very cheap, low-end hardware. Pretty much anything not in the top 100 is dealing with a tiny amount of traffic, and even a fair bit of the top 100 is. So it's GitHub and Twitch, neither of which actually does anything substantial in RoR any more. That seems like a pretty good reason to think RoR's strength isn't high traffic volumes.
This is wrong. GitHub is still a large Rails app. See presentations by the core Rails team people who work there like Eileen Uchitelle and Aaron Patterson.
That's what I'm going on. It literally says the RoR stuff is just the basic webpages; everything else has been rewritten or was never RoR. Same deal as Twitch: the RoR is just the trivial web portion, which is mostly cached. All the chat and video is in Go.
This may be true of Twitch but not GitHub. GitHub uses https://github.com/libgit2/rugged to bridge between Ruby and Git. As far as I'm aware, they still use Ruby to pull the diffs etc out of Git etc.
The number of use cases that people implement with SPAs is huge, while the number of use cases where an SPA is actually better than just rendering some HTML can be counted on one hand.
For those saying to forgo Next.js and instead serve templates from a non-Node server, how are sub-views handled? The same as with JavaScript, but the AJAX call returns a sub-template and data?
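One common shape for what the question describes, sketched without any framework (the `renderPartial` helper, the partial names, and the `/partials/...` route are all illustrative): the server renders just the sub-template to an HTML fragment, and the client fetches it and splices it into the page.

```javascript
// Server side: a registry of sub-templates. In a real app these
// would be template files rendered by your template engine.
const partials = {
  userCard: (data) => `<div class="user-card">${data.name} (${data.role})</div>`,
};

function renderPartial(name, data) {
  const tpl = partials[name];
  if (!tpl) throw new Error(`unknown partial: ${name}`);
  return tpl(data);
}

// A handler for e.g. GET /partials/userCard?id=42 would look up the
// data for that id, then respond with just the fragment:
const fragment = renderPartial('userCard', { name: 'Ada', role: 'admin' });

// Client side (browser), the "AJAX returns a sub-template" half:
//   const html = await fetch('/partials/userCard?id=42').then(r => r.text());
//   document.querySelector('#sidebar').innerHTML = html;
```

The trade-off versus returning JSON is that the server owns all rendering, so the client stays a thin shell with no template logic duplicated in JS.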
"SimpleSSR: Don't do SSR" is as humorous as "SimpleC++: Don't use C++". I'm guessing it's funny to SSR haters? (Related question: Are "SSR haters" a thing?)
So is the essence of this particular hot-take that it's better to render pages per-request with Ruby or PHP than to serve a rendered result that has no additional per-request overhead?
It's not that simple. (I say this as a JS dev who develops SPAs for a living).
For one, "no additional per-request overhead" is an oversimplification; that is not the case for most SPAs that grow beyond 'small' size (see 'code-splitting').
Modern SSR involves re-using the same client-side rendering code on the backend (which typically means a Node backend), and modern SPAs may employ some techniques to code-split the client side app, either by-route or other means.
You can make a decent case that for a lot of 'sites' this whole setup is more complex than the server-side rendered frameworks of old mentioned by this site.
> For one, "no additional per-request overhead" is an oversimplification…
That's fair, but setting aside packaging decisions like code-splitting (which aren't unique to SPAs), my point is that SSR is typically used to deliver content and app logic that would otherwise require round-trips to code running on a server. It hardly seems like a huge win for "SimpleSSR".