SimpleSSR: Server-Side Rendering at Scale (simplessr.org)
136 points by momonga on Oct 27, 2018 | 129 comments



I may get hate for this but watching JavaScript development mature makes me feel like I am taking crazy pills. So much shit was figured out decades ago but it's like everyone is just learning this shit for the first time. I don't know if it is because the standard library is so lacking or that it's historically been a client-side-only language, but it makes me feel completely disillusioned with programming. It's like we're not progressing at all, merely learning the same shit over and over again each generation in a series of cycles.

Or maybe I'm just too old and need to get off my own lawn.


You’re not crazy.

I created and maintain VueJS's other server-side rendering library, express-vue.

I’ve spoken on it at a few talks, and every time I do, the eyes of the audience just go blank when I talk about SSR. It’s like I’m talking in some strange language they don’t seem to understand, where the only thing they get is client side everything.

I get the stupidest questions, like "so how do I add in <insert unnecessary bloat for a SPA here>?" Most of these are about state management on the client, i.e. Redux, Vuex, etc. To which I reply: "You don't need that. State lives on the server. It's simple."

Since building this library and watching it get as big as it has, and having to defend it and defend SSR, I've fully burnt out on JavaScript and frontend altogether.

We use my library in production at work, and have started talking about migrating to Nuxt. I'm planning on changing teams before that happens.

Javascript is just hype driven development, of the worst kind.


Thanks for all the work you've done on express-vue. Even though I'm not the biggest fan of Vue and only really use it at work, it makes me happy to know there are good libraries out there for it that do one thing well (a trait that's sorely lacking in a lot of JS libs).


No, it's not you. I read the title and I immediately understood the context and said to myself- welcome to 1997.


"Isomorphic."


Have you tried templating in Golang or Rust?

Tera templates are rendered during compilation; the URL is here if you want to have a look at the syntax: https://tera.netlify.com/

(I'm not affiliated with the project, just think it's rad)


> it's like everyone is just learning this shit for the first time.

Because they are. The field is full of amateurs and under-trained developers who are positively allergic to history and learning.


you're not alone.

the js world is full of people coming from jquery plugin development. they've been thrown into unknown waters by react, angular & co because „javascript". but they don't know shit about everything that happened before their time.

it seems like they're very good at marketing though. i came across folks in management naming react as a solution to everything. even to design problems.

we have at least two problems now. sjw and js.

it's sad.


You are not alone


You are correct. It's why we constantly see things like "Show HN: A JavaScript util for making strings upper case"


You can't beat serving useful data to the user on the first payload (base HTML), no matter how fast or clever client-side rendering gets.


Agreed. Useful data in the first payload is what matters for a responsive experience. The kind of engineering team you have drives whether that's appropriately accomplished with view templates, rendered view frameworks, or some other HTML rendering technology that embeds preloaded data.


Here's a pattern: since your Service Worker won't run the first time a user visits a page, send server-side rendered content that also includes a line installing a Service Worker once the page has fully loaded from base HTML with useful data.

The second time the user visits your page, the Service Worker will ideally be installed and will run, intercept the requests, download your components separately as they're needed, and save them to cache, as opposed to letting the server render the components to base HTML.

From the third visit onwards, components are loaded directly from cache and rendered via the Service Worker.
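
Roughly, the shape of that looks like this (just a sketch; the cache name and file paths are made up):

  // In the server-rendered page: register the Service Worker only after load,
  // so it never competes with the first (SSR'd) paint.
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', function () {
      navigator.serviceWorker.register('/sw.js');
    });
  }

  // sw.js: on repeat visits, serve components/assets from cache,
  // falling back to the network and caching whatever comes back.
  self.addEventListener('fetch', function (event) {
    event.respondWith(
      caches.open('app-shell-v1').then(async function (cache) {
        const cached = await cache.match(event.request);
        if (cached) return cached;
        const response = await fetch(event.request);
        cache.put(event.request, response.clone());
        return response;
      })
    );
  });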


This is exactly the pattern I use in https://github.com/PaulKinlan/webgde-deck and whilst it's a little harder to manage, it's incredibly fast.

https://webdev.topicdeck.com/ is an example of it hosted.


FYI I'm getting "542: An error occured with your deployment" when clicking that link!


With a service worker you will often also cache the initial html, so it is then really all dependent on the speed of the code.


Are you arguing for or against SSR here? (Both would include base HTML, no?)


A single page application (SPA) without server side rendering (SSR) doesn't serve "useful data to the user on the first payload". It serves an empty HTML shell with a script tag (or multiple tags) linking to the Javascript app bundle (or multiple bundles). If the app bundling doesn't break up the app into smaller bundles that are asynchronously loaded, the initial JS payload will probably be 10+ MB. At which point the browser needs to download the 10 MB of JS, execute it, and have the JS fill in the HTML with "useful data".

So no, an SPA without SSR doesn't include useful data in the first payload.


There are patterns (prerendering, streaming (https://jakearchibald.com/2016/streams-ftw/#streaming-result...), etc.) other than simply breaking up the JS app bundle into smaller pieces that can alleviate the traditional "SPA concerns" — hopefully you can just reduce its size overall.

For other thoughts and ideas regarding SSR pros and cons, check out this presentation given at the Polymer Summit (http://www.youtube.com/watch?v=wYGoJ8R3nnM&t=5m35s).


If you are serving 10MB of JS, then you have a lot more optimizing to do way before you need to think about SSR.


A Phoenix + React application I'm working on uses react-stdio to render React components server side and speed up page load time when users get to the site for the first time or reload the page in the browser for whatever reason.

We ended up with a few hundred MB of RAM for Elixir and a few GB for the Node.js servers. Basically our machines are JavaScript application servers with a little bit of Elixir running in a corner. It feels so odd.


10mb of JS — Examples?


10 MB could be an exaggeration on average, but not by a huge amount. Not even trying to find something crazy big, this is the first site I checked because I went to it earlier today and noticed it took a long time to load:

https://help.salesforce.com/articleView?id=000232181&languag...

It is a simple help article, but instead of serving the actual help article it spends about 2 seconds serving 4 MB of JS and rendering the article.

After that I typed 'app.' into my URL bar and went to the first site that came up, that was 3.9 MB of JS on the login page.

Numbers are for uncompressed JS, including the larger of the inline scripts.


For one example of JS-dumbfuckery, the Patreon embed button is served as a React app, with all of React bundled in dev-mode, weighing in at just over 2mb.

All of that for a red rectangle with the word "Patreon" in the middle.


I was going to say Slack but it "only" transferred 5MB.



I'm seeing a lot of posts about NextJS here...

Adding a (huge) abstraction on top of another (huge) abstraction to solve a problem introduced by the first abstraction is rubbish software engineering. It's one of the oldest anti-patterns in the book.

I get that these libraries provide some utility beyond their core purpose, and that it's a bit more nuanced than A -> B -> A, but I'm highly suspicious of people that recommend these libraries without giving much context behind it.


The industry norm for SPA seems to be moving towards SSR along with code-splitting and offline workers. The main SSR framework for React is NextJS. Vue and others have their equivalents.

Honestly I don't see a reason not to use it. I could write a traditional website using Next/react at the same speed as someone using Django/rails/etc.


Next.js is a front-end framework, right? So how can you build a traditional website using only a front-end framework at the same speed as someone using Django or Rails?


It's not like it's hard to find: https://nextjs.org

Next integrates the server and client code in a single framework, and is made for seamless deployment using Now.
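
To give a sense of what that integration looks like, a minimal page might be something like this (a sketch, not taken from the Next docs; the API endpoint is hypothetical, and isomorphic-unfetch is just one common way to get fetch on both server and client):

  // pages/index.js
  import React from 'react'
  import fetch from 'isomorphic-unfetch'  // one way to get fetch on server and client

  const Index = ({ posts }) => (
    <ul>{posts.map(p => <li key={p.id}>{p.title}</li>)}</ul>
  )

  // Runs on the server for the first request, on the client for navigations.
  Index.getInitialProps = async () => {
    const res = await fetch('https://example.com/api/posts')  // hypothetical API
    const posts = await res.json()
    return { posts }
  }

  export default Index

The same component renders to HTML on the server for the first request and hydrates in the browser afterwards.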


I previously visited nextjs.org and read the docs, and it wasn't clear that Next.js is a full-stack framework. The term "full stack" isn't used anywhere at nextjs.org[1], unlike rubyonrails.org[2].

Furthermore, there are numerous references on the web to Next.js being a front-end framework, including statements made by Next.js maintainers[3]. Do these statements not reflect the current state of Next.js?

If Next.js is a full-stack framework, then this seems to be another instance of JavaScript tools having inadequate documentation. Speaking of which, if Next.js is a full-stack framework that's comparable to Rails and Django, why is the documentation just one page (!) and there's no mention of database migrations or other basic features that are included with every full-stack framework?

[1] https://www.google.com/search?q="full+stack"+site:nextjs.org

[2] https://www.google.com/search?q="full+stack"+site:rubyonrail...

[3] https://github.com/zeit/next.js/issues/1933#issuecomment-300...


Next.js is not really a fullstack framework. It provides a server for static websites but as soon as you want dynamic content you can use any js api framework of your choice.


Apologies for the tone. The RoR website does not describe itself as 'full stack', nor is it a full-stack framework in the modern sense, since there is no client-side library other than the Ajax helpers.

Next.js is not a 'client side framework' in the usual sense, as it encompasses server side rendering, but it doesn't mean you'll be writing your backend logic in it (unless you want to) or that it will include an ORM.


disclosure: I'm the biased author

texas https://gitlab.com/dgmcguire/texas is a library that brings development back to the server. You can integration test without headless browsers, get SPA-like UX without writing a line of JS, and basically just write SSR apps, adding only a few lines of code to make your apps realtime and SPA-like.

It works by leveraging persistent connections (websockets): the first page load is entirely SSR and subsequent updates are SSR, but only by V-DOM diffing html fragments, sending only the minimum amount of patch data over the wire.

Your apps will be faster, and your development process will be faster, with less code and without all kinds of complex tooling.

Here's a todo app that does all rendering via SSR: https://warm-citadel-23442.herokuapp.com/ - you can see just how fast it is even on a free tier of Heroku. Open it up in multiple clients and you'll see that all operations are broadcast in realtime too, and believe it or not there are only about 3 lines of code you write to make that happen (plus all the code necessary to get a normal SSR app running).


> but only by V-DOM diffing html fragments, sending only the minimum amount of patch data over the wire.

A virtual DOM is used by the browser to update your page. The algorithm involved updates only the changed parts instead of the traditional way of updating the whole page. So I'm not sure how 'minimum data transfer over the wire' is involved here. Can you please clarify? Not sure if you meant getting JSON data from REST APIs.


VDOM is just a concept that means representing the DOM in code. Texas keeps an elixir data structure representation on the server as a cache of client state and diffs against that. It then pushes a patch up to the clients that is used to update only the parts of the page that have changed
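
As a rough mental model (a sketch of the general idea, not texas's actual message format), the client-side half is basically:

  // The server diffs against its cached representation and pushes per-node patches.
  const socket = new WebSocket('wss://example.com/live')  // hypothetical endpoint

  socket.onmessage = function (event) {
    const patches = JSON.parse(event.data)
    patches.forEach(function (patch) {
      // each patch targets one node by a server-assigned id and carries new markup
      const node = document.querySelector('[data-node-id="' + patch.id + '"]')
      if (node) node.outerHTML = patch.html
    })
  }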


VDOM - virtual Dom, right ?

> Texas keeps an elixir data structure representation on the server as a cache of client state and diffs against that.

Sorry, I don’t think that’s scalable. And, in a load balanced environment, you have to have a central server for this purpose


Not true at all. For one, 99% of the time people on a realtime-updated page will be viewing the same content, meaning you can keep a local-only cache that all clients connected to that server can share... that makes the memory footprint negligible... we're talking maybe a few MB on a typical webapp.

Secondly, as far as scaling goes, you only need to broadcast extremely small messages (on the order of bytes, not kilobytes) across servers to tell clients to update themselves using their local caches.

It's extremely scalable.

Further, texas only builds on top of HTTP semantics, meaning it can fall back to a stateless protocol at any point and the user won't even notice, other than their app working a bit more slowly (but still working) for a few seconds while the websocket reconnects to a new node.


That demo on heroku loads shockingly fast for me. I see initial page load is 10kb for the html/css, and then the js loads in after. Nice.


Hi, I just watched through the first two parts of your three part intro to Texas on YouTube.

I noted that you restart the server process each time you make a change to the Elixir application, and coming from the Clojure universe I have the following question: is it possible, instead of restarting the server program each time you make a change to its source code, to modify the program as it is running?

This is usually how I work on my Clojure programs, and I find that it results in near-instant feedback while I am programming.


Yeah, Phoenix already has live-reloading included, I just probably hadn't set up the watchers properly or I was being dumb xD - it's pretty trivial to have everything set up to give instant feedback, though.


Is Texas much different to Rails Turbolinks? Or is it like Turbolinks for Phoenix?

PS I'm really happy that you called out "graceful degradation to an application that continues to function in the absence of any JS".


very much different - turbolinks just uses AJAX to get a full rendering and replaces the body of the DOM with the response - texas can do realtime updates via pushing only patches (far less data over the wire) over a websocket


Given WebSocket is now supported in most browsers, I have been wondering why such a technique hasn't been more widely used.

I wish HN allowed me to search through all the posts I have viewed or favourited; I am pretty sure I asked a similar question or saw something similar with Rails.

Edit: Found it. (Not really similar though)

https://fie.eranpeer.co


Very cool. Yeah, I think there are a few novel concepts that texas brings to the table, but I'm super excited to see more projects trying to leverage persistent connections to make SSR viable again. I believe SSR is not only viable, but optimal given the right techniques. Hopefully texas is pursuing all the most optimal abstractions to make this work, and I think it is, but I'm excited to see more people thinking about this. Time will tell I guess!


Do you still use HTTP compression? Or is it not necessary since the data you are sending to the browser is pretty small most of the time?


on the demo? No it's not using any compression - just got something working quickly one day to show off the library. The inner workings for the patch messages are undergoing a lot of reworking and I have plans to make different levels of compression available via configuration for the messaging protocol for the patches being pushed over websockets. This is a pretty young library though

the first page load is just a typical SSR webpage though - so compression could be applied just as easily as any other SSR webpage


That's interesting, but the latency is clearly perceptible between each action and the related screen update.


Likely you're just located somewhere far from the server in Virginia, USA. If you watch the conference talk just a few minutes in I show a gif of the latency from Seattle to a server in California. Most interactions you can't tell at all.



I see your library is for phoenix. This seems like a direct competitor to LiveView?


well, considering liveview doesn't exist yet it's not a direct competitor just yet - but yes it'll be solving similar problems as liveview - although last I talked to chris he was taking a much different approach than I am in the implementation details.


I can't say I fully comprehend the SSR vs SPA war. Both concepts are definitely being misused/overused, one is "dated" and the other "modern", but surely the middle ground where the best usage of both can be attained is preferred. I don't know of any terms that mediate the opposition other than perhaps "hydration"?


This is the key. There are super developed SSR frameworks and super developed SPA frameworks, but getting the two to work together is like pulling teeth. No one tries to write an SSRSPA framework because that would be considered too "opinionated", but maybe that's exactly what we need.


Doesn't nextjs do this already? You write an SPA that nextjs implicitly also knows how to render server side


Yeah, that's essentially the purpose of Next.js and it does it very well.


If you're using node.js and redux, then https://github.com/faceyspacey/redux-first-router pretty easily gets you there. Other than turning route changes into redux actions, I don't find it particularly opinionated.


I made https://www.roast.io/ because of this.

Most of the current SPA frameworks can re-hydrate SSR'd JS without issue, so instead of configuring SSR, just use a headless browser, which is what Roast does.


Ok, I was confused by your comment and the other person's comment. SSR and SPA are not mutually exclusive things.


They aren't mutually exclusive, but they are either a) at odds with each other because the frontend is in javascript and the backend is in Rails or Python or something else, or b) completely in sync because the backend and front end is in javascript, but terrible because the backend and the front end is in javascript. With web assembly on the horizon we will soon have Rust/Crystal/Go-driven unified backend+frontends, and then we can finally exit this dark age of JS being the only front end language.


I have not had the chance yet to play with it, but it's very high on my list: Drab, an extension library for the Phoenix framework "providing an access to the browser's User Interface (DOM objects) from the server side".

A friend did an experimental library way back using server sent events for the Yii framework. I don’t think I fully appreciated the idea back then. Admittedly Elixir seems a better fit for this pattern than PHP tho

https://tg.pl/drab


As someone who prefers Python, it is unfortunate that the only way to get SSR is to commit to JS, but with WebAssembly I think we will eventually have tools such as React and Redux translated to other languages.

As far as performance goes js isn't too bad. It's async by default and a lot of effort has gone into making it run fast.


Yeah I actually have few problems with js itself. It's npm and this tendency to have a million 3 line dependencies. In, for example, the ruby gems ecosystem, there just isn't that problem.


I've been using Next.js and SSR with react is dead simple. I use React even when building standard web pages because the React way of doing things makes more sense to me as a programmer than the html/css/js way of doing things.

I like building composable building blocks with self contained state. The old model of separate html, css, and js never worked well for me.

My main complaint with React SSR is that it almost forces on me the requirement of using Node for the backend. I'd prefer to use a language like python.


Is there any way to use Python as the backend? I feel the same way about handling raw HTML/CSS/JS. Either I haven't learned enough about handling those or I should learn React/Angular/Vue, but I do want to stay on Python for my backend.


I mean, the way to do it is to have one server for SSR and standard requests, and another server for the API written in the language of your choice.


Gatsby.js (and some other recent tools) makes the SSR vs. SPA argument moot. There's no need to pick one when you can have both.

Disclaimer: I'm not affiliated with Gatsby.


I’m a fan of gatsby and think it’s a sensible approach for many sites. But it has limitations. Most obvious being it essentially reverts to an SPA if you need the user to be logged in before you can render any real data.


React SSR is not a problem, and I can get quicker TTFB than I ever remember getting with even a simple RoR app (300+ms yikes).

I can also work faster with the React ecosystem (and find it much more enjoyable) than with RoR, and SSR makes the end result the same anyway.

People like component-based frameworks and modern JavaScript (and its supersets). Things are different now. Deal with it!


> Deal with it!

We, your users, do deal with it. Constantly. Usually, we deal with it by waiting 30 seconds for your single-page application blog article to load because we only get intermittent 3G reception on the metro.

I don't really care what sort of frameworks you like. Stop building shit webapps!


Your complaint has nothing to do with using React over using other frameworks such as django.

The problem you have is that people are not implementing SSR on their websites and they don't host all of their assets.


some sites are just modern Rube Goldberg machines


You missed the entire point of my comment.


300ms for a simple RoR site? Sounds suspicious.

That said, even Gmail is on the order of seconds before being useful nowadays...


Gmail has gotten painfully slow over the years. I just ran a quick timing test with a high-end 2017 MacBook Pro on Firefox:

* It takes 2.53 seconds from hitting enter to being able to see my inbox.

* It takes 9.96 seconds from hitting enter and clicking compose for the new message window to pop up.

* It takes a whopping 15.95 seconds from hitting enter until the new message window pops up if you use the following link: https://mail.google.com/mail/u/0/#inbox?compose=new

Those times are pretty damn terrible. Even worse, this is with a primed cache!


I thought you were taking the piss, then I actually tried it.

13 seconds to the compose window for me. Fuck me...


35 seconds for me from that link. Looks like it got stuck loading my chat contacts, as it appeared in under a second after that populated.


Problem with your mac? I'm getting 2.23 seconds to open new message window from your link on a maxed out Lenovo X270 running Ubuntu 18.04 via Chromium. Mind you, I'm also on Gigabit internet. So yeah, Gmail is fucking slow.


I also have gigabit internet.

Trying it out with Chromium, it loaded the direct link after 3.88 seconds. Trying it out with Safari, it loaded the direct link after 6.27 seconds.

Firefox is pretty fast most of the time. Google's web apps are the only things that perform so poorly on it. I don't understand how they can have such a drastic performance disparity between browsers. Maybe they're just not testing enough on Firefox.


I mean, Google stands to benefit from google apps intentionally being slow on firefox, so who knows.


The frustrating part of this is that they instead claim major accolades and bonus points for how fast Chrome is. :(

It would be one thing if I felt like I was getting something fancy for my wait. I don't.


Ha! I was curious why it was loading better for me today. Just recently switched to faster internet.


>People like component-based frameworks and modern JavaScript (and its supersets). Things are different now. Deal with it!

Yo modern Javascripters:

Stop building crappy, slow and/or unreliable SPAs.

Maybe afterwards you could tell everybody to "deal with it".


TTFB vs TTLB is a real issue, if you load the shell instantaneously and then the page sits waiting on XHR and rendering.


TTFB with SSR. Not a shell. It seems from some responses that I wasn’t clear enough, my apologies.


Crystal completely fixes your response time woes


+1 to Crystal! Amber (Crystal MVC) is a very good framework as well... basically Rails, but about 100x faster


That is assuming it will reach 1.0 soon and has a framework that is anywhere close to Rails. (Lucky and Amber aren't close)


Also if you haven't taken a look in the last 6 months, I would. Things have REALLY started to solidify and come together. They are also 2/3rds of the way through Windows support, which is the main barrier right now to 1.0. Nothing else significant is going to change before 1.0 AFAIK, and it's my job to know.


Still better than the node js ecosystem imo. We are using Amber in production, and it's been a pretty good experience. We control the Dockerfile so we aren't pressured to upgrade things the moment a breaking change comes out, but we stay on top of it.


>Deal with it!

Telling people "we don't like your broken crap" is one way of dealing with it.


The “deal with it” is aimed at developers, not users. The entire point of my comment is there is no difference in the end result when you have SSR.

The user still receives a page of HTML and CSS, and then JavaScript is loaded.


Upcoming Phoenix LiveView is actually pretty viable for SSR -

https://www.youtube.com/watch?v=Z2DU0qLfPIY


I really don't get this whole SSR thing. You take a framework designed explicitly for dynamic client side code and use it to make a glorified old school framework (insert your choice of RoR, Django, Laravel, etc).

Why does anyone think this is a good idea? You end up re-implementing Rails on a foundation of quicksand and manage to throw the excellent standard libraries that come with those languages out in one stroke.

Is this a result of too much kool-aid? Or have we ended up with hippie bootcampers without a shred of knowledge in high level positions? It's depressing to watch really.


I firmly believe defaulting to SSR/old-school is the way to go. And only add dynamic behavior when needed.

But when you do need to add that dynamic behavior... things get hairy.

See this comment for more info: https://news.ycombinator.com/item?id=18289460


I went to a conference recently where one of the talks was a guy from comcast on 'Progressive Web App performance'.

The two takeaways were that they had decided against ever using SSR (he didn't say why, just acted like it was ridiculous), and that through various clever performance tricks they'd managed to get their page load time down from 30 seconds... to 15.


On-demand Server Side Rendering should be killed for 98% of applications.

SSR->static hosting + client framework is the future.
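
For concreteness, here's one sketch of what "SSR -> static hosting" can mean (assuming React components and a hypothetical route list; any framework with a server renderer works the same way):

  // build.js: render each route to a static HTML file at build time,
  // upload dist/ to any static host, and let the client bundle hydrate.
  const fs = require('fs')
  const React = require('react')
  const { renderToString } = require('react-dom/server')
  const App = require('./App')  // hypothetical: the same component tree the browser hydrates

  const routes = ['/', '/about', '/pricing']  // hypothetical route list
  for (const route of routes) {
    const markup = renderToString(React.createElement(App, { url: route }))
    const file = route === '/' ? 'index.html' : route.slice(1) + '.html'
    fs.writeFileSync(
      'dist/' + file,
      '<!doctype html><div id="root">' + markup + '</div>' +
      '<script src="/bundle.js"></script>'
    )
  }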


I can't help feeling that edge rendering is greatness personified ... but that it's perhaps not any more useful than just server side rendering. An edge renderer is still going to have to either access a model/controller held centrally (getting our latency times up again) or use some godforsaken distributed/replicated state that's a bitch to debug. Perhaps an edge renderer can hold open a connection to the central logic and knock a hundred millis or so off the set up time as a result, but none of the proposed models for doing this would seem to support that. They don't seem to support stateful anything.


Yeah this is a little too on the nose.


Also probably shouldn’t put RoR as the first item in your list of full-stack frameworks if your title says “at scale”


Having run a platform on RoR that did billions of requests per day, it scales just fine. It’s not as cost effective as Go, Java, Rust, etc., but it’s perfectly scaleable.

Scaling web apps horizontally invariably becomes more about where your data lives, how it gets scaled out, how it’s accessed, how work get processed (jobs), what caching you do, what memory requirements you have.... yada yada. Rails wasn’t ideal, but rarely are you ever working in a pristine “ideal” tech stack once you’ve hit scale and you’re 5 years into a business.


Lots of companies successfully use RoR at scale.


I think both statements depend on the definition of "at scale". I am no expert, but every RoR thing I have ever seen has been tiny in terms of traffic and still performed terribly. Is there anything in the top 100 sites using RoR?


YouTube, No. 2, was still running Python 2.7 as of last year. Python and Ruby are basically the same performance wise.

Instagram is also a Python app, AFAIK. Last year they contributed some memory efficiency improvements to CPython.

Shopify is likely the largest RoR site at 80k requests per second but since it's served as tons of different domains it doesn't really count.


Stripe is also on Ruby. I think we should separate Ruby and Rails in performance comparison. Python and Ruby are definitely on the same scale.

>Shopify is likely the largest RoR site at 80k requests per second but since it's served as tons of different domains it doesn't really count.

And a different DB, etc... Last time I said this I got massively downvoted for it. I think GitHub is possibly the largest RoR site out there.


Twitter was (originally) also built on RoR. Not to mention GitHub is a rails app.


It's also worth noting that Twitter's feature velocity fell off a cliff when they moved to Scala.


Oh interesting, I hadn't noticed that pattern.


I do think, though, that there is a certain size companies like this reach where they go from innovative to being afraid to change anything, lest the viral popularity go away.


GitHub, AirBnb, Bloomberg, Hulu, Basecamp, Goodreads, Groupon, SoundCloud, Kickstarter, ...


There's quite a distance between "tiny" and "top 100 sites"! Some pretty darn popular sites that use Rails (afaik) are GitHub, Shopify, Hulu, Twitch, and Airbnb.


Not really. The web has become massively centralized, with almost all traffic going to a small number of sites. The bottom of the top 100 list is only getting about 50 million pageviews a month; even poorly written sites in slow languages can easily do that on very cheap low-end hardware. Pretty much anything not in the top 100 is dealing with a tiny amount of traffic, and even a fair bit of the top 100 is. So it is GitHub and Twitch, neither of which actually do anything substantial in RoR any more. That seems like a pretty good reason to think RoR's strength isn't high traffic volumes.


This is wrong. GitHub is still a large Rails app. See presentations by the core Rails team people who work there like Eileen Uchitelle and Aaron Patterson.


That's what I am going on. It literally says the RoR stuff is just the basic webpages; everything else has been rewritten or was never RoR. Same deal as Twitch: the RoR is just the trivial web portion, which is mostly cached. All the chat and video is in Go.


This may be true of Twitch but not GitHub. GitHub uses https://github.com/libgit2/rugged to bridge between Ruby and Git. As far as I'm aware, they still use Ruby to pull the diffs etc out of Git etc.

https://githubengineering.com/how-we-made-diff-pages-3x-fast... indicates they were still doing this as late as 2016.


GitHub? (although as far as I know they are moving partially away from it)


> including Django, Rails, and Laravel.

All dynamic languages.


The number of use cases that people implement with SPAs is huge while the number of use cases where a SPA is actually better than just rendering some HTML can be counted on one hand.


My Golang templates render in tens of ms. About the same time my slow computer's browser takes for an SPA.


For those saying to forgo Next.js and to also serve templates from a non-Node server, how are sub-views handled? Same as JavaScript but the AJAX call returns a sub-template and data?


> Is this a joke? Sort of, not really.

"SimpleSSR: Don't do SSR" is as humorous as "SimpleC++: Don't use C++". I'm guessing it's funny to SSR haters? (Related question: Are "SSR haters" a thing?)

So is the essence of this particular hot-take that it's better to render pages per-request with Ruby or PHP than to serve a rendered result that has no additional per-request overhead?


It's not that simple. (I say this as a JS dev who develops SPAs for a living).

For one, "no additional per-request overhead" is an oversimplification, that is not the case for most SPAs that grow beyond 'small' size, see 'code-splitting'.

Modern SSR involves re-using the same client-side rendering code on the backend (which typically means a Node backend), and modern SPAs may employ some techniques to code-split the client side app, either by-route or other means.

You can make a decent case that for a lot of 'sites' this whole setup is more complex than the server-side rendered frameworks of old mentioned by this site.
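
For reference, the core of the "re-use the client rendering code on the backend" setup is small, even if the surrounding tooling isn't. A sketch with React and Express (the App component and bundle path are placeholders):

  const express = require('express')
  const React = require('react')
  const { renderToString } = require('react-dom/server')
  const App = require('./App')  // placeholder: the same components the browser renders

  const server = express()
  server.get('*', function (req, res) {
    // render the component tree to HTML on the server...
    const markup = renderToString(React.createElement(App, { url: req.url }))
    // ...then ship it with the client bundle, which hydrates the same markup
    res.send(
      '<!doctype html><div id="root">' + markup + '</div>' +
      '<script src="/bundle.js"></script>'
    )
  })
  server.listen(3000)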


> For one, "no additional per-request overhead" is an oversimplification…

That's fair, but setting aside packaging decisions like code-splitting (which aren't unique to SPAs), my point is that SSR is typically used to deliver content and app logic that would otherwise require round-trips to code running on a server. It hardly seems like a huge win for "SimpleSSR".


Was the page edited after it was submitted here? It doesn't say "don't do SSR", or imply it.


It does—that's the point of the page.


Where? It says use one of the many frameworks that do SSR. How is that saying not to do SSR?


NextJS - GG


After reading all the comments here... "TRY ALL THE NEW FANCY FRAMEWORKS"


Let's make SSR great again



