
I don't intend to jump on you, but this claim strikes me as super suspect. There are very few situations, outside of pathological cases, where querying for data, rendering server-side HTML, and performing a full page reload is going to be faster than client-side processing of a (much smaller) response to that same data query followed by a partial DOM rebuild.
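
To be concrete about what I mean by "a partial DOM rebuild": the client asks for just the data that changed and patches the affected node, instead of re-requesting and re-rendering the whole document. A minimal sketch (the endpoint and element id here are made up):

    // Ask the server for just the data, not a whole page.
    async function refreshCart() {
      const res = await fetch('/api/cart');      // small JSON payload
      const cart = await res.json();
      // Rebuild only the piece of the page that depends on that data.
      document.querySelector('#cart-count').textContent = cart.items.length;
    }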

And that's not even taking into account perceptual effects; if you sit people in a UX lab, a page flash during rendering makes them feel it's slower. (I have done this test--adding a hook to the end of a web request to hide spinners and draw a white "flash" over an SPA--and the subjective "snappiness" score dropped from an average of 8 to an average of 5.) Nor is it taking into account slow connections and degraded functionality during interruptions, but I'll cut you some slack on that one just because so few people pay attention to it in the first place.
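
(For the curious, the hook was conceptually something like the sketch below -- a simplified reconstruction, not the actual instrumentation; the .spinner selector and the timing are illustrative.)

    // Wrap fetch so that when a request settles we hide any spinners and
    // briefly paint a full-viewport white overlay, simulating the blank
    // frame of a full page load.
    const realFetch = window.fetch.bind(window);
    window.fetch = async (...args) => {
      const response = await realFetch(...args);
      document.querySelectorAll('.spinner').forEach(el => el.remove());
      const flash = document.createElement('div');
      flash.style.cssText =
        'position:fixed;top:0;left:0;right:0;bottom:0;background:#fff;z-index:9999';
      document.body.appendChild(flash);
      setTimeout(() => flash.remove(), 100);   // roughly one frame of "page blank"
      return response;
    };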

I'd love for you to explain more, but I'm gonna register ahead of time that I kind of expect a "doctor, it hurts when I do this" explanation.




I was a tech lead on an SPA for almost a year. You need a whole lot of JavaScript to make an SPA happen, and that hurts performance in both the transport and execution stages. When we rewrote a high-traffic Ruby app as full-stack React, TTFB doubled.

If loading a new HTML page takes less than 300 ms, or even 500 ms, the cost is negligible in UX terms. That's easy to achieve on a traditional site, especially if you use caching.
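
The caching I mean is nothing exotic. A rough sketch, assuming an Express app, a made-up route, and arbitrary cache lifetimes:

    const express = require('express');
    const app = express();

    // Hypothetical renderer standing in for whatever templating the site uses.
    const renderArticlePage = (id) =>
      `<html><body><h1>Article ${id}</h1></body></html>`;

    app.get('/articles/:id', (req, res) => {
      // Let the browser and any CDN reuse the rendered HTML for a short
      // window, so repeat navigations don't wait on a fresh server render.
      res.set('Cache-Control', 'public, max-age=60, s-maxage=300');
      res.send(renderArticlePage(req.params.id));
    });

    app.listen(3000);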

I may be cynical about it due to my own experiences, but I don't like SPAs anymore. You pay for a very context-specific gain in perceived performance with a lot of problems in unexpected places. Happy to go on and on about it if you like.


You don't need "a whole lot of JavaScript", though. React and Redux are under 100k minified. So, yanno, a Retina-doubled JPEG or so. Your own JavaScript need not be particularly large--we're talking about text here, and text that gets gzipped over the wire at that. If you're dealing with 2G connections, I can see an argument. If you're not, paying for a full page blank and reload on every action strikes me as, er, significantly worse.

And, FWIW, I have been party to metrics and user research showing that 200ms of latency can double your bounce rate. That 200ms of latency is not going to come from downloading react.min.js.

You may be right in some niches, but overall what you describe doesn't square with the applications I build or the ones my clients build, and I've been doing this pretty broadly for a while.


The cost is not in transmitting the JS over the wire but in executing it. Parsing and running 100k of JS is very CPU-heavy.

SSR does help, though. But then there's the whole rehydration dance...
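
(By which I mean roughly the shape below -- a bare-bones sketch using the pre-React-18 API and a placeholder App component: the server renders the tree to HTML, then the client downloads and runs the same component code again just to attach handlers to markup it already has.)

    // server.js: render the component tree to an HTML string.
    const React = require('react');
    const { renderToString } = require('react-dom/server');

    const App = () => React.createElement('main', null, 'hello');
    const html = renderToString(React.createElement(App));
    // `html` gets embedded in the page shell next to the client bundle.

    // client.js: run the same components again and "rehydrate" -- walk the
    // server-rendered markup and attach event handlers to it.
    // (React 16/17 API; React 18 renamed this to hydrateRoot.)
    const ReactDOM = require('react-dom');
    ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));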


As I said in a prior comment, YMMV and some people love it. What you're saying right now does not ring true for me. But I'm not the kind of person who likes to pressure developers on how they perform their craft. I was just sharing my experience since you seemed to be in doubt about whether or not I was making it up. I'm not.


  > React and Redux are under 100k minified.
And the usable content on a single page is way less than 1KB (I'm talking about web apps, not content-heavy document pages). With markup you might get to 2-3KB.


Of course. But you're paying the round-trip on every page request, you're page-blanking every time--it just feels worse. For some things (where you expect documents--a news site or whatever) that's not the worst thing in the world, but the modern web is increasingly less document-based and more application-based.

And if initial payload is truly that much of a concern, Preact + Redux is under 5KB minified/gzipped.



