Can You Afford It? Real-World Web Performance Budgets (infrequently.org)
214 points by josephscott on Oct 27, 2017 | 128 comments



Do we need all this JS? I’m starting to look back at classic web development, where you had a server-rendered MVC-style app: there was a controller, and templates were populated on the server. Every page was a full rerender, but it was small HTML. Proper caching kept the minimal CSS and images on the client for a regular session under 30 minutes. This seems better.

I know we switched because front-end experts told us that users found the full refreshes annoying. They said that users really wanted native-desktop-like seamless page changes. Never did I see proof of these claims. Is the modern JS-based design our cubicle moment? Did we get suckered in by experts making productivity claims with nothing to back them up?


"Do we need all this JS?"

I know at least one user who is asking this same question. He does not believe it is needed.

Is it possible that "what users want" and what developers want may be two different things? Could developers have wants that are unique to developers?

Purely anecdotal but I do not know any fellow users who "want javascript". I know many who do not want a number of common annoyances though. And I know these hassles are in many cases enabled via javascript.

The tactic leveraged against users is to tell them they need javascript for some thing to work. Then users want it. But I think really they just want the thing to work. They have no particular affection for javascript, let alone any knowledge of why they need it.

One thing I have observed over the years the web has existed, from the early 1990s to today, is that users can adapt to anything. Whether it was learning keyboard shortcuts on the console and using text-only programs like Pine, or running www searches that took minutes to finish, or communicating in 140 characters or less, or working with tiny touchscreens and horribly slow web pages and mobile apps. Many more examples if I gave it some more thought. From what I have seen, users accept what they are given and find a way to make it work.


How do you notify a user of errors in input without a complete page refresh if you do not use Javascript (ignoring the most basic HTML stuff)? Without at least some scripting capability, you're talking about making the experience worse for the user to the benefit (ease of the request/response model) of the developer.

Don't throw the baby out with the bathwater. It is the responsibility of devs to use technology well. You don't want all of this JS, but you do want some of it.

Too many people are making their assumptions about what is possible and what JS is capable of based upon views of web development that are 5-10 years out of date or based on bad software. Bad software is endemic to the industry, it's not representative of any single tech stack.


You just let the page refresh. It's really not that much different. I am not saying don't use JS for UX improvements mind you. But your comment suggests that any other approach makes no sense. If you have a massive form that has 12 fields then it would be a huge help, but just a couple of text fields? Why replicate the backend validation in client side JS when it's already there on the backend? Then you would have to maintain two code pathways in two different languages in two different locations if you need to update the validation rules. Where simple works, pick simple.

5 years ago was 2013, there were some pretty great things going on in the web in 2013. Don't lose sight of lessons learned by previous work either. A new shiny framework doesn't automatically make it better practice.

While we are reinventing the wheel in client side JS we should make sure to remember that it has probably been done before, software has been around for decades, the web as well, and to think the best of it has only just happened in the last five years is a bit misguided.


This. You can't trust client-side validation anyway, so you need to have some server-side. So why not just let the page refresh, and then add a tiny bit of JavaScript that'll send off the form via AJAX for validation?

Suddenly, the site becomes solid, lightweight, and supports everyone. That's what I think people used to call "progressive enhancement".
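A no-framework version of that enhancement can be a handful of lines; a rough sketch (the form id, the 400-on-error behaviour, and the redirect-on-success are assumptions):

    const form = document.querySelector<HTMLFormElement>("#signup");
    if (form) {
        form.addEventListener("submit", async (e) => {
            e.preventDefault();                      // without JS the form still posts normally
            const res = await fetch(form.action, {
                method: form.method,
                body: new FormData(form)
            });
            if (res.status === 400) {
                // server re-rendered the form with inline errors; swap it in
                // (a real app would re-attach this handler to the new form)
                form.outerHTML = await res.text();
            } else {
                window.location.href = res.url;      // success: follow wherever the server sent us
            }
        });
    }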


I would love to have a way to tell the browser "don't throw away the current DOM, just apply the differences". It would work nicely on non-JS pages to make the experience smooth (and would probably work terribly if there was a lot of JS, of course).


This. Every major client lib out there uses some variant of vdom. If this DOM diffing were natively supported, the browser would request with its current DOM hash and the server could efficiently reply with a diff that both server and client can compute to be the same new hash.

If it's forms, the client calls the server with the base hash + changes made by the client, and the server replies with the base hash + a new diff to apply. The new diff includes validation changes.

It's trees and diffs all the way down.
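There's no native browser API for this today, but a rough userland sketch of the idea (fetch the next page's HTML and patch only the subtrees that changed, instead of reloading) could look like this; the function names are made up and the diffing is deliberately naive:

    async function fetchAndPatch(url: string): Promise<void> {
        const html = await (await fetch(url)).text();
        const next = new DOMParser().parseFromString(html, "text/html");
        patch(document.body, next.body);
        history.pushState({}, "", url);
    }

    function patch(current: Element, next: Element): void {
        if (current.isEqualNode(next)) return;       // identical subtree: nothing to do
        // Different element type or child shape: swap the whole subtree.
        if (current.tagName !== next.tagName ||
            current.children.length !== next.children.length) {
            current.replaceWith(next.cloneNode(true));
            return;
        }
        // Same shape: sync attributes and leaf text, then recurse.
        for (const attr of Array.from(next.attributes)) {
            if (current.getAttribute(attr.name) !== attr.value) {
                current.setAttribute(attr.name, attr.value);
            }
        }
        if (current.children.length === 0 && current.textContent !== next.textContent) {
            current.textContent = next.textContent;
        }
        for (let i = 0; i < current.children.length; i++) {
            patch(current.children[i], next.children[i]);
        }
    }

Libraries like morphdom (and the vdom layers inside React/Vue) do essentially this, just with much smarter node matching.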


This is exactly what React's virtual DOM does. Maybe the future is making the virtual DOM native to the browser?

Bundle React with Chrome? Or how about a browser package manager that could cache and reuse all these common js libraries used on so many sites?


DecentralEyes does just that.


Sounds like PJAX.


Page refresh takes a longer time than client-side validation.


But you still need to do the server-side validation.


That doesn't detract from the value of client side validation, which is that it will be faster from the client's perspective and reduce server load.


Yes, but from the end-user perspective, it's nice to not have to do a round-trip to find out that you entered something wrong.


Fair enough.

Still, validating a form client-side doesn't require a JavaScript framework; it would only require the server to send small snippets of JS for each field with the form.
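For instance, a single field's snippet could be a few lines of plain JS on top of the browser's constraint validation API; the selector and the rule here are just illustrative:

    const email = document.querySelector<HTMLInputElement>("#email");
    if (email) {
        email.addEventListener("blur", () => {
            const ok = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email.value);
            email.setCustomValidity(ok ? "" : "Please enter a valid email address");
            email.reportValidity();   // shows the browser's native validation bubble
        });
    }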


Needing a page refresh for input validation degrades experience.

"Then you would have to maintain two code pathways in two different languages in two different locations if you need to update the validation rules."

Not necessarily. You can use the exact same code, in the same location, written in the same language.
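A sketch of what that can look like with a Node backend (the module and function names are made up): one validation module imported by both the server route and the client bundle.

    // validators.ts — shared by the Express route and the browser bundle
    export interface SignupForm { email: string; age: string; }

    export function validateSignup(data: SignupForm): string[] {
        const errors: string[] = [];
        if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(data.email)) errors.push("Please enter a valid email");
        if (!/^\d+$/.test(data.age) || Number(data.age) < 18) errors.push("You must be 18 or older");
        return errors;
    }

The server stays the source of truth; the client just reuses the same function for instant feedback.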


But by how much and does it actually matter? It's also a pretty small example. The problem of complexity outstripping benefits only grows the more you inspect the frontend landscape.

My main point is just that the implication that JS is required otherwise the UX will be unbearable is false. It's also really easy to mess up the UX with JS and I see it constantly. I would take a fast form post over a cumbersome JS powered experience any day.


Well, you don't need a SPA to do that at all. You don't have to do the validation client-side, you can still get the server to do it with some extremely simple javascript.

You can render server-side, then switch it to an ajax post in javascript. On submit, post the form back in ajax (a simple `.serialize()`) and if there's a validation error in the form, return the HTML form back in the response with the validation errors and replace it in the DOM.

Virtually instant feedback, no page reload, no 30 second initial SPA load.

In ASP.Net MVC, for example, this is as trivial as changing the `Layout` property of the page in the controller, the full layout for GETs, no layout in POSTs.

Technically you don't even have to support progressive enhancement, but it's so trivial in most frameworks you may as well.

I believe Rails has a built in turbo feature which essentially does this and more.

Example TS code, though usually I put more in to disable double clicks and add a loading spinner:

    // Intercept the native submit, post the form via AJAX, and on a 400
    // response swap the re-rendered form (with validation errors) into the page.
    onSubmit = (e: JQueryEventObject) => {
        e.preventDefault();
        let $form = $(e.currentTarget);
        $.ajax({
            type: $form.attr("method"),
            url: $form.attr("action"),
            data: $form.serialize()
        })
        .done(() => {
            // do something, e.g. show a success message or redirect
        })
        .fail((xhr) => {
            if (xhr.status === 400) {
                // server returned the form HTML with validation errors
                $(".js-form-wrapper").html(xhr.responseText);
                this.wireUpControls();
            } else {
                // handle other errors
            }
        });
    }
And wire it up on initialize with:

    $("#your-form").submit(this.onSubmit);


The comment said "no Javascript", not "no SPA".


> How do you notify a user of errors in input without a complete page refresh if you do not use Javascript (ignoring the most basic HTML stuff)?

Well, it used to be that submitting a form and rendering errors on the server took the same ~100ms that evaluating a javascript framework validation does now, and so it was... not a problem.

If you mean "leaving aside the most trivial html stuff" to include things like html5 inputs, that's the most significant part of what Javascript provides in form validation.


You're glamorizing the old web. The old web had the same quality issues: most form submissions were not taking 100 ms, and JS validation does not (have to) take 100 ms today.

If you exaggerate the numbers, you can't make any useful comparisons.


What about all of the information that was already in the submitted form?

We can now keep the user's state and inform them of errors in a way that is more difficult with server-side rendering.


> What about all of the information that was already in the submitted form?

Of course the form returned from server validation comes with all user input prefilled? At least with Django that's the default behaviour.


Does "all user input prefilled" include file uploads?


It's not complicated. The server could offer to reuse the already uploaded file or let you select a new one, identifying it with a session cookie or some kind of file hash.


> Could developers have wants that are unique to developers?

There seems to be a sizable portion of this, and far from limited to the web world.

That said, some of it may also come from the managerial level, thanks to cargo culting akin to the offshoring fever a decade or two back:

management pushes for something to be done in a certain way because they read about some big-name corp doing the same, and wants the company to appear "aggressive" to investors...


I'd say it's half fashion and half business incentives. JS frameworks are fashionable, every kid learns JS and Node now, and this leads to what I see as a big part of programmer population suffering from acute "only-got-hammer-everything-is-nails" syndrome. But business incentives play a big part too. It's easier to make a shitty SPA than a well-engineered site, but all the shittiness gets externalized on the users, so companies don't care - they get to be little quicker to market, but they don't have to pay for the worldwide waste of electricity and bandwidth they cause.

(Not to mention user frustration. Mainstream webdev doesn't give a damn about that either.)


>I do not know any fellow users who "want javascript". [...] They have no particular affection for javascript, let alone any knowledge of why they need it.

Framing it as end users "do/don't want javascript" isn't helpful for analyzing the advantages/disadvantages of javascript. Obviously, end users don't think of it that way.

We could also say that we don't know any users who "want polycarbonate on their face". That's true, but they do want clear 20/20 vision and we just happen to use polycarbonate as the material that combines the attributes of light weight and inexpensive manufacturing compared to glass.

Nobody wants to "ingest fungus poop" either. Except they do because that's what beer is.

Users also don't want a "30 minute barrage of explosions 5 feet from their body", but they do indirectly want that because they would rather drive a car to the store instead of walk. (Some might extract the wrong idea from that example and insist that we shouldn't be dependent on fossil fuels anyway. Well, people also don't want "volatile chemicals" either which is what lithium batteries are. They also don't want a steel tube shoved up their butt which is what bicycles are.)

(To get back closer to the tech world, end users also didn't ask for "HTML" or "want CSS" or "HTTP" either. They don't think in those terms.)

You're right that they don't want "javascript" per se. What they want is a "fluid experience" in the UI. We happen to use javascript to provide that fluidity.

>I know many who do not want a number of common annoyances though. And I know these hassles are in many cases enabled via javascript.

True. But it ignores the benefits to users that javascript enables. Things like airplanes also bring hassles such as noise but that ignores the fact that people wouldn't want to give up flying because planes also enable convenience.

Therefore, people do want javascript -- indirectly. They want autocomplete in amazon and netflix search fields. They want smooth map repositioning in Google Maps without page reloads. They want up/down voting buttons and expand/collapse outlines to work in stackoverflow, reddit, HN without jarring page reloads. Many pages in wikipedia also work better with javascript.

Yes, the abuses and misuses of javascript that hijack scrolling, or javascript that breaks apart a single-page article across 20 screens with tracking and ads, are annoyances we don't want. However, it shouldn't keep us from having a balanced discussion that includes how javascript improves the web experience for the average user. The uncompromising insistence on avoiding client-side javascript in favor of server-side page regeneration&reload is hostile to the user. Many web users do not browse from desktops with fast fiber optic connections. They browse from mobile phones with ~500+ ms round-trip latency.[1]

Like most of you, I'm a "power user" so I should be a prime example of an uber geek who "doesn't need javascript". And yet, I use regex101.com every week and it relies on javascript. It would be a total hassle if I had to press "submit" every time I changed a character to experiment with regexes.

[1] https://serverfault.com/a/573815


I agree with you for content sites or primarily consumption driven apps but I haven't written one of those in 8 years. I'm currently working on an application that we are having to purposefully keep features out of, not because they are slowing the page down, or increasing TTI, but because we risk offending our business partners by building a one for one replacement of their multi-thousand dollar per seat desktop software. There's just no getting around the MB of JavaScript we need for this app. And for this type of application, we are much more concerned about the speed of individual actions on the page than page load. The site still loads faster than the desktop application opens.


This. Set aside the performance of the initial load for a moment. When I interact with the SPAs we’ve built in recent years, I am still shocked how snappy they are as I perform actions throughout the site. Every “pageview” requires the download of a tiny REST API response and comparably tiny HTML fragment. As a user - that experience is a pure delight compared to a full blown page re-load, re-download and re-render with every action.

As a user it infuriates me when I fill out a long form, hit submit and find out I forgot to select the proper “Mr. Mrs. Ms” title and my password, credit card number and PIN have all been blown away.

As a user I’m willing to pay a few more seconds upfront for snappy and seamless interactions as I use the app.

I think the mistake that is more often committed (and where I myself as a user have less patience) is treating everything that can be accessed via URL as an “app”. If the user is coming to just download bytes, don’t build an app. An app is only justified when a user is coming to manipulate data.


> As a user it infuriates me when I fill out a long form, hit submit and find out I forgot to select the proper “Mr. Mrs. Ms” title and my password, credit card number and PIN have all been blown away.

I'm quite sure this is a sign of poor development, regardless of the form being validated on the server or the client. There is no reason for the server not to send back a fully pre-populated form.


Credit card info, passwords...


You're already submitting it to the server. It's no less secure sending it back if there's a problem.


This comment displays a worrying misunderstanding of web security. Sending passwords (or credit card numbers or other scary stuff) in plaintext over the wire is bad. Hash them securely (well, as securely as you can on the client side). Sending them twice is worse. Storing them with the session data on the backend so that they can be sent back to the client to repopulate a form is way worse, since more than one webserver may be handling a session, that plaintext data will have to be stored in a centralized database (e.g. memcached), which is really really bad, even if that storage is temporary.

Some web forms get around this by just sending back the lengths of the individual fields so they can be masked when the form is repopulated, but that's less common and is something that doesn't occur to lots of developers (store the length, but don't send the whole string). Instead, it's easier to follow the mantra of "never store the passwords in plaintext, never send them over the wire in plaintext". Which is why most web forms clear out the sensitive fields when repopulated via the server.

TL;DR it is much, much less secure to do this.


> This comment displays a worrying misunderstanding of web security. Sending passwords (or credit card numbers or other scary stuff) in plaintext over the wire is bad

What are you talking about? Who said anything about plaintext? And no one said anything about sessions either.

If you're submitting to the server in plaintext, then returning in plaintext is no less secure, but you're stupid for not submitting over https which is free.

If you're submitting to the server over https then it's definitely no less secure to return the data.


Think you're a bit confused about how it all works. There is no need to use plaintext, we'll assume we're using https by default. You are already sending the data from the client to the server, I can as well echo it back with a couple of validation errors.

I don't see the security hole in this situation. You do realise that, regardless of the obfuscation you see in your password input fields (which by the way only protects from over-the-shoulder peeking), you are willingly sending data to the server and it will end up there unencrypted anyway? Otherwise it wouldn't be possible to do any operation at all.


Security operates in layers. TLS isn't the end-all, be-all for security. It's one layer that prevents a category of attack vectors. Every extra unnecessary sending or storing of sensitive data is an attack vector.


Again, we're not storing anything. And in our case TLS is the layer we're using for protecting our data in-transit. You rely on it as a client when you send the form to the server.

Can you give me an example where POST-ing a form would be secure but responding with the data adds another attack vector? They're both protected by the same "layer" of security, aren't they? How is one more vulnerable than the other?


What he's suggesting is that you never send the password to the server, you only send the hash.

The password itself never leaves the client, and never hits the network.


Password itself is an exception, I agree with you since you don't need the plain value at all, you can get away with comparing hashes. But how about CC data?


Couldn't you do the same? Hash spaces are segregated by zip code (hash(zip) is the key to buckets), and then hash the credit card and CVV; compare to verify.

Store the last 4 digits of the CC, and Mastercard vs. Visa, to provide the "is this the correct card?" form.


I'm not sure you can. If a site asks you for CC and CVV data it should already be PCI compliant, say a payment processor. Don't they need the plain text CC and CVV to be able to bill it, if it's a first time user? How can the client send a hash, compare it to what?

I'm really curious about how you see this working, I can't understand how you can forever obfuscate your CC info. I assume at some point the server needs your plain data.


I've personally never worked with payment processors, but I don't see why your intermediate servers would need to have it plaintext

To the payment processor, they'll always have the cc/cvv info, or rather, the hash of it, to verify with. If they don't have it, then I imagine the user doesn't have a valid cc. (How can you have a VISA cc that visa doesn't know about? If its a valid cc, eventually we have to hit someone who knows it...)

For creation of the CC, it's entirely done offline at the moment, isn't it? And since it's basically just a shared secret, you could do the whole Diffie-Hellman physically... but anyways.

So then on your side, as some kind of store, it should be the same as passwords. The payment processor tells you the hash function to use, and you use it client side, and the processor verifies.

Alternatively, you send the user to someone else (i.e. redirect to PayPal), who does the same thing.

It doesn't seem at all necessary to me that your cc/cvv actually leaves your machine in plaintext. Of course, they may be sending plaintext for some reason, but I don't see an obvious reason why they should


Multi page flows are fine for many data entry style apps which covers a lot of the web.

But for more nuanced software, responsive and rich ui is very important to user uptake.

Examples: Gmail, Facebook, Maps, Slack, SoundCloud, every autocomplete search box everywhere.

If you accept that parts of the page should be interactive then you need to consider the benefit of an isomorphic approach to eliminate code duplication. Usually it's not just a single interactive component.


You can do all of that with very lightweight JavaScript though. 10+ years ago I did autocomplete using a few carefully crafted lines of code and the page still renders just fine.
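In that spirit, a rough dependency-free sketch (the element ids, the /suggest endpoint, and its JSON shape are all assumptions):

    const input = document.querySelector<HTMLInputElement>("#search");
    const list = document.querySelector<HTMLUListElement>("#suggestions");
    let timer = 0;

    if (input && list) {
        input.addEventListener("input", () => {
            window.clearTimeout(timer);              // debounce keystrokes
            timer = window.setTimeout(async () => {
                const res = await fetch("/suggest?q=" + encodeURIComponent(input.value));
                const items: string[] = await res.json();
                list.innerHTML = "";                 // rebuild the suggestion list
                for (const item of items) {
                    const li = document.createElement("li");
                    li.textContent = item;
                    li.onclick = () => { input.value = item; list.innerHTML = ""; };
                    list.appendChild(li);
                }
            }, 200);
        });
    }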

The problem is JS frameworks make development easier but have high real world costs that are often ignored.


Your second sentence describes engineering trade-offs. The simplest engineering solution to a canyon is to walk around it. The most practical or productive solution over the long run is a different question entirely from the simplest or easiest to implement.


Yup, those are engineering trade-offs. Making your life a little easier by causing users a disproportionate amount of resource waste, frustration and security issues is just shitty engineering. Even though the business types may like it, because externalities are good for profit.


Frameworks are simpler to implement and maintain, that does not make them better for users.


Are JS frameworks really easier than your good ol' fully server-side rendered websites? I doubt that. There are costs for everything, both for using and _not_ using JS. It's silly to think that only using JS adds a cost and nothing else. That's just tunnel vision.


"I did autocomplete using a few carefully crafted lines of code"

Now take 10 developers reusing that autocomplete in the same project in 10 different places. I bet those carefully crafted lines become 100 little monsters, one without an eye, another with 3 legs.

Not that they have to be stupid, it is just they think differently.


If they were really carefully crafted, then they likely can be carefully repackaged into a solution that could be included in those 10 different places with zero - or close to zero - runtime overhead on the code sharing mechanism.

But it takes actual thinking - and care - to do that. Something that seems to be in short supply with the current web trends.


Users don't care about development. Sure, for us frameworks are great, for them the trade offs look far worse.


Gmail started out as a multi-page flow, and it worked perfectly fine. Page loads were quick because there wasn't several MB of JS to be interpreted with every page load, just some HTML and CSS.


It’s all about how and where you use it.

Imagine a world without AJAX. This is obviously an opinion, but it would be super annoying if every button I clicked sent a raw POST to the server and refreshed the page.

Do news sites need to be rendered client-side with React + Redux + React Router + all the Babel polyfill garbage? No, I don’t think they do.

But some sites are definitely better in my mind for being SPAs.

The problem is that people pick technology that they like or are familiar with instead of working backwards from the ideal user experience. What we should be doing is picking the minimal set of technology that achieves the UX goals of a project. Instead, you have people loading 200kB of dependencies to render pages with a level of interactivity that could be achieved with 30 lines of vanilla JS. Oh, and then Google, FB, Twitter, and Optimizely tracking scripts.


One important and more subtle advantage of SPAs is that they enforce a clean architectural separation of concerns between the untrusted view layer and the backend API. This decoupling allows the frontend team to iterate quickly on user experience and run experiments without affecting the underlying business logic. This also makes it easier to build alternative interfaces (e.g. mobile apps) that share the same API. Of course these benefits matter only when building more complex applications (not simple content sites).


> Imagine a world without AJAX.

This is my favorite HN quote of the month.


I remember gmail before it used AJAX. In many ways, I miss that interface - it just worked. Sure, I didn't get real time email notifications, but come on, it's email. Sure, there were a lot of page refreshes, but they were super quick since there wasn't several MB of scripts being parsed and run with each load. Just some HTML with a few JS interactions.

They even still offer that interface - check it out. It works remarkably well (and quickly) to this day.


Good single-page app development is just as clean (I've done both), and will be faster over the long term, and scales out better. It keeps all the components cacheable and DRY. The benefits are all about data locality. If your code and data are cached on the client, then only the minimal subset that has changed needs to be invalidated and updated. If it's only data that has changed, that's all that's retrieved. If it's only code, same goes here. For old-school MVC, if a single character has changed, the whole page is invalidated and must be re-rendered.

The tangible benefits when done right (there's lots of crappy code in every language and system) are greatly reduced user-perceived latency (the system reacts immediately, indicating that something is currently being updated, etc).


It's not a thing of the past. The majority of websites still use this capable and pragmatic approach.

I haven't got any information to support it, but I suspect it was mostly cargo culting from big vendors with big platforms that has made JS Single Page App style sites seem ubiquitous, because everyone in the spotlight is talking about the cool stuff they are doing with it. But in reality they still make up a minority of the web.


I prefer the "classic" MVC design approach. It may not be flashy and some may turn up their noses at it for not being hip, but it's easy to develop, test, and scale. I certainly use JavaScript, but I use it when it adds to the workflow of the application.


> They said that users really wanted native-desktop-like seamless page changes.

This perhaps roots in the behaviour of browsers ten and more years past. Back in the day, you'd often get a "flash of white" during page loads. Today, not any more. A full-reload and a "seven million lines of JS client-side renderer" application are virtually indistinguishable. Chances are that the "full-reload" app is faster.


I think today the web is trying to be two things. 1) classic text documents with links to other text documents. 2) a cross-platform runtime for desktop class apps.

Over time 2) has been increasingly shoe-horned into the limitations of 1) and it's all a giant clusterfuck now. Leave HTML to linked documents and give us an efficient runtime that we can plug into to deliver desktop-class apps written in whatever language we want, working against a standardized runtime and UI language shipped built-in with every browser.

Why the hell can't we build web UIs with a standardized UI markup (not HTML), wire it with whatever language our company uses, and compile it down to WebAssembly and ship it to clients' browsers? Instead we have to write these monstrosities in shitty JavaScript with virtual doms, and hacks everywhere. Madness!


>Why the hell can't we build web UIs like we do desktop apps, wire it with whatever language our company uses, and compile it down to WebAssembly and ship it to clients' browsers? Instead we have to write these monstrosities in shitty JavaScript with virtual doms, and hacks everywhere. Madness!

We can, but probably not now, because WebAssembly isn't quite ready yet. It still lacks garbage collection and efficient DOM access, and the specs aren't even finalized. There are already experimental targets for a few languages like Rust[0] and Lua[1], and WASM targets C/C++ out of the box.

Eventually, you'll probably be able to import arbitrary languages and environments with a few HTML tags, similar to Javascript today, or whatever the Webassembly version of a package/dependency/VM manager turns out to be, then compile and run any application ever made in the browser.

[0]https://news.ycombinator.com/item?id=12619017

[1]https://news.ycombinator.com/item?id=13899829


And then there are media companies that try to shoe-horn 1) into 2) because it gives them more control over the user and allows them to track the user and serve them ads.

If only there was a way to collectively punish this type of behavior.


Like ad blockers and informed browsing habits?


No, not that. Because that has not worked at all and I see no reason for why it should start working tomorrow.


It's probably predicated on a lot of other assumptions. Full refresh might be really annoying if your site is slow and bandwidth heavy. But, for example, squirrelmail is so fast I don't notice the full refreshes.


people want (in ecommerce sites, news sites, actual websites, etc): working back buttons, regular scrolling, open-in-new-tab that works, elements that don't move around, obvious and not-hidden navigation, and also tons of gimmicky/flashy crap.

people want (in desktop replacement spa's where they spend hours and hours): app-appropriate usages of keyboard nav; background tasks with status windows, working back buttons that don't stop the music stream; links that work like links & app-clickables that work like buttons; refresh buttons that refresh (via a page reload) their current full state, but working.


For me, this sums it up [0]. I prefer HN to Reddit because it's snappy. But every site I've ever worked on that didn't start off with a good front-end framework wound up being a huge mess on the front-end. The front end generally becomes more and more complex even for simple sites.

"Can we add discussions to this page?" "Great! Now, can they be live-updating?" "Great! Now, can we have a nice little notification counter?" "Great! Now can it be collaboratively edited?"

Eventually, the front end becomes a monstrosity.

When starting out, if you just plan for a complex front end, and choose a well-structured framework, your life as a developer will be much less painful. Yes. It's overkill for a blog. But who among us is really building a simple blog? We're building applications that happen to run in the browser.

[0] https://hackernoon.com/why-does-this-site-require-javascript...


MVC + Turbolinks. I still barely use any JS except frontend libs for fancy utils.


> Do we need all this JS?

The answer to this question is always no.


I wrote about this exact topic last week at https://nickjanetakis.com/blog/server-side-templates-vs-rest....

The TL;DR is, the classic web development model still works amazingly well today and you can get real-time aspects by sprinkling in Javascript where needed.


Regarding full refreshes, I was thinking the other day: why do browser vendors not implement them more gracefully, as in Pjax-style, but as a clean reload? The problem with Pjax is that it still keeps state around, when I want to simply remove all state (setTimeouts, setIntervals, websocket connections, events bound to $window etc.)


Even the full page loads aren't necessary with Turbolinks or PJAX. If I had a choice, that's definitely what I'd be using for what I'm working on now at an ecommerce company. But the boss insists on microservices, and once the advantages that come with a monolith are gone the SPA is just a better approach imo.


Correct me if I'm wrong, but don't Turbolinks and PJAX work through JS as well?


Yes, but way less. Turbolinks in particular basically just requires you to add the library, which is very lightweight, and then you get page changes without the bad visuals of the whole screen flashing white, and it's not too difficult to keep it fast too.
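If memory serves, with the npm package the setup is just the following (with a plain <script> include it even starts itself):

    // assumes the "turbolinks" npm package (Turbolinks 5)
    import Turbolinks from "turbolinks";
    Turbolinks.start();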


Most sites can do with very little JS. Many do well with just a bit, like hacker news, which uses JS for little stuff like voting.

Appier sites, like video, music, games, maps, etc are gonna need a lot more client-side logic to be pleasant to use.


I'm no fan of “modern js-based design”, but I can pay for a lot less server if I can get the client to do all the rendering for me.


Yup.

But the thing to consider is - a lot of things can be pre-rendered and cached. Your server can render it once and serve to many clients. With client-side rendering, each client is rendering the same thing for themselves, possibly slower than on your server because JS is not exactly the sharpest tool in the shed. So you've saved yourself $N in compute but externalized $N * number of users * relative JS inefficiency. This is not a nice thing to do.
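Roughly, in Node terms (Express here, and renderPage is a stand-in for whatever template engine you use):

    import express from "express";

    const app = express();
    const cache = new Map<string, { html: string; at: number }>();
    const TTL_MS = 60000;   // re-render each URL at most once a minute

    app.get("/products/:id", (req, res) => {
        const hit = cache.get(req.originalUrl);
        if (hit && Date.now() - hit.at < TTL_MS) {
            res.set("Cache-Control", "public, max-age=60").send(hit.html);
            return;
        }
        const html = renderPage(req.params.id);     // hypothetical server-side render
        cache.set(req.originalUrl, { html, at: Date.now() });
        res.set("Cache-Control", "public, max-age=60").send(html);
    });

    declare function renderPage(id: string): string; // placeholder for the real renderer

Render once, serve the cached HTML to everyone, and let the CDN/browser cache do the rest via the Cache-Control header.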


I would wager that most people who say this have never actually calculated the cost of server side rendering vs bandwidth costs + users dropping off the site


> I know we switched because front-end experts told us that users found the full refreshes annoying. They said that users really wanted native-desktop-like seamless page changes. Never did I see proof of these claims.

Even so, you can have that experience with server-side rendering and simpler client-side updating.


This. Several old-style 'server side rendered' pages I made seem superfast with https://github.com/turbolinks/turbolinks .


How would you implement collaborative editing, where users can see each other type instantly?


I think there is (limited) semantic value to single page forms. They also encourage api first design. Is this always worth it? Certainly not. But it does have value.


I think the problem is in all the data-binding of JSON responses to the front end. I don't care what anyone says, it's slow.


Slow in what sense? I mean, for almost all actual use cases, it seems to be quick enough when done right.


If you don't care what anyone says, then I guess there's no point in presenting data.


> 45% of mobile connections occur over 2G worldwide
> 75% of connections occur on either 2G or 3G

this is so important. if you want growth you need to be making a product for growing markets.


No, you need to assess where your customers are and focus there. Just because a market is growing, it doesn't mean that market contains customers for your product. That depends on numerous variables including pricing.

The US will add ~$570 billion to its GDP this year. That's equal to 25% of India's entire economy. An increasingly large part of Europe is back to generating solid growth again. Even Japan is showing signs of life. You don't need to focus on 2G markets if there are few customers there for your products.


> You don't need to focus on 2G markets if there are few customers there for your products.

as someone who used to have poor internet access, rest assured that I will have no sleep until every review website and comment section has something about how slow your website / app / whatever is. These ones generally don't lag.


Meanwhile, HN loads like a charm and time to interactive seems to be less than 0.5s

Seems like that's what happens when you are too deep in the js bog, and you forget what is below the thick js cover


I recently launched the landing page[1] for my startup and the designer gave us a crazy animated design.

All of my personal sites never required JS and have always worked just fine in Lynx. I didn't want any JS on the landing page (not even for analytics, we can do that server side), so decided to see what HTML + CSS can do these days.

I was pretty blown away with how easily I was able to have a half-dozen animations running in parallel with 3 videos playing and being transformed in a 3D space.

Even better, the page works perfectly fine in Lynx! I recognize that there are times with the modern web where JS is required, but I try my damnedest to minimize it. And modern web standards make it easier in ways that were impossible a decade ago (e.g. no more $.fadeIn()).

[1] https://banter.fm


If you add an overflow:hidden to your content section, you won't have that wonky extra space to the right. That's just a quick fix. I didn't look into why it was happening.


You can even define glsl shaders in css nowadays :x


Yeah, if only all web apps could be text only with almost no interactivity, am I right?


How much of that single-page interactivity is really required, and how much is developers and designers chasing the new shiny?


And works largely the same without Javascript enabled, last I checked. Edit: Yup, can still view pages, up/down/flag in essentially the same flows.


It's 2017. It's shameful that our expectations for web speed should be this low.


I was going to say this but wanted to read the comments first to see if anyone else did.

The constraints of global technology account for this budget. But I have a personal standard of 200ms for TTI on my projects, and I push hard against any requirements or suggested libs or UI features that breach this.

5 secs? You can get away with that if you’re a global brand that people are already hooked into, I guess. The amazon iOS app is infuriating on a brand new iPhone 8 on gigabit WiFi. And that’s an app where every second of load time is lost revenue. They do not, apparently, have any kind of budget concept.

Facebook is similarly awful. They can afford to blow their load budgets because users are already locked into the network. Or at least they think they can.

When I’m bringing a new app to market, speed is my highest priority feature. And the feature backlog is organized by cost of speed, not by difficulty or time to market.

And like another poster said above, most of this is completely unnecessary. Be a dinosaur. Write “MVC” apps with server side rendering and deal with Ajax where you have to for as long as you can. Write the very little bit of js you need, and rock on.

It’s not hip, and it’s not always fun. But it’s fucking fast.


Most if not all C# roles I see these days demand Angular / React experience. It's really hard to talk confidently about that unless having actually done it; you'll get picked apart / walked by. And it had better be covered by Jasmine / Karma as well, and using Typescript. In the cloud. Etc.

Thus we have Resume Driven Development - for better, or worse. As an older person (I got my Vic 20 when I was about 10), I find it equal parts inspiring and frustrating. Having a family / child it's nearly impossible to devote anywhere as much time to this as the younger, unencumbered ones must have. Yet I still love the new stuff.


Sorry, but 200ms for TTI on the web? Are you measuring the same way that he is? He includes 1600ms just for DNS lookup and TLS handshake.


Sorry I wasn’t clear about that.

The article is using some very heavy constraints on networking and hardware based on a global, mobile audience.

My constraints do not take that into account. But they would only add a max of 200ms on top of that minimum the article is talking about.

Not anything near ~4 secs. 4 secs to do anything on a web app is malpractice.


Front-end developers could solve all of this by just learning the base technologies of HTML, HTTP, CSS, and JS really well and ignoring everything else.

We're at a beautiful point in web development where browsers are finally unified on open standards, and we're all too locked into these outdated JS frameworks to take advantage of it.


It's important to strike a healthy balance between reinventing the wheel and locking yourself into a bulky framework. Whether you are building a personal project, or working on another's time, in the early phases your #1 concern should be speed of development. If a compact, properly modularized, battle-tested tool that completes your desired task is readily available, should you really be wasting your time writing something that's been done a thousand times before?

I've been around the block with popular front-end frameworks. The first one I've truly found to be a joy to use is Mithril.

Unlike React, a framework that's gotten a lot of buzz here lately, it's more than just a view library. There is a routing paradigm in place along with a few other goodies. Yet it all weighs in at <8kb gzipped and generally performs faster than React. It really feels like it hands you just the things you need to get started and steps out of the way.

Check out its performance compared to a few other popular frameworks:

https://mithril.js.org/framework-comparison.html


But you are missing my main point: what are all these frameworks for?

I bet if you asked any front-end developer what the main benefit of their chosen framework is, they will say something like code organization or file structure.

It used to be that you needed a framework to abstract away all the browser incompatibilities. But that was in 2009 -- look up any given W3 standard on CanIUse these days and you will find support across all major browsers.

What if we could just get by without a framework? And just organize our own code? Is that so far-fetched?


It's nice of you to make your web presence work for someone with a 400Kbps link and a 400ms round-trip-time (“RTT”); while you're at it, also make sure it works with JavaScript turned off! And browsers with limited CSS support. And is readable in a terminal browser. And at 600x400 px resolution.


It's remarkable how much JS gets loaded onto a page (mainly due to libraries) vs. how much of their functionality is needed for what the web developer is trying to do. There just are not a whole lot of use cases for the big libraries unless you're doing something really intensive like a drawing app.


Even for the example of a drawing app, there is no need for huge libraries. The basic code of putting up a canvas and enabling some drawing on it is pretty small. Most of the apps I've seen with lots of Javascript are still doing it for extras in the UI, not the drawing itself. But many developers would rather grab an existing library than think out the simplest, smallest code that does the job.


Tree shaking should help... eventually.


Next year, in Jerusalem


You can have tree-shaking right now, with rollup.js. You can also have purely transpiled isomorphic component library (no runtime download) running at near vanillajs speeds with svelte.js.
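The setup is small; a minimal config like this (paths are placeholders) is enough for Rollup to follow ES module imports and drop unused exports from the bundle:

    // rollup.config.js
    export default {
        input: "src/main.js",
        output: { file: "dist/bundle.js", format: "iife", name: "app" }
    };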


His argument is based on an example where he puts JS in <head>, when it's been a recommendation for ages to put JS at the very end of <body>.


It looks like he needs the JS to load first for the browser to make sense of his custom elements: "If our example document wasn’t reliant on JavaScript to construct the <my-app> custom element, the contents of the document would likely be interactive as soon as enough CSS and content was available to render meaningfully."


Howdy; author of the article here.

TTI would be pushed back regardless of where the script was included so long as it executed for more than 50ms (very likely). I used a trivial document that contains many pathologies I see in traces to illustrate the point, not to suggest what ideal apps will do.


JS at the end of the body is quite an out-of-date approach; these days the recommendation is to use the defer or async attributes.

The challenge with all these approaches is that JS will generally still delay Time To Interactive.


I'm sure there are niches out there where complex web apps are best built without an SPA framework, but use the right tool for the job! If I'm writing an embedded interrupt routine, I don't use slow clunky C, I use assembly. If I'm writing a complex web application, I use an SPA framework like Vue. If I'm delivering static content, I use minified html and css.

Everything comes with a tradeoff. If you pick the wrong tool, it's a big headache.

Learn to use all the tools.


Most of the requirements for Google's AMP project help bring down TTI by a large margin if you follow them as well.

https://www.ampproject.org/docs/tutorials/create


I think the article makes some valid points, but it's important to remember that PWAs are just getting going. The existing SPA frameworks are still a little crude at the moment, but there is definite progress. For my own app (https://usebx.com), I eventually ended up writing my own front end framework, as existing frameworks at the time felt rather bloated. Now, however, things are starting to look better with the likes of Vue.js etc coming on the scene. I also feel that PWA development will explode once Apple ships service workers with Safari, and that will bring along with it even more motivation to innovate in this space.


Given how much iOS restricts any kind of background thread and how service workers require being run independent of a specific page context, I would be really shocked if iOS literally ever provides support for them.


The service worker spec was deliberately designed to give the browser, not the site, control over resource consumption. The browser is permitted and encouraged to keep them on a tight leash on behalf of the user's experience. See e.g. https://github.com/w3c/ServiceWorker/blob/master/implementat...


It's in development, so we know it's on the cards (https://webkit.org/status/#?search=servi&status=in development)


Actually, I think it is more important to be aware of the issues and handle them than to set strict size limits. For example, a few years ago I led a redesign project for a mobile web shop and the designer told me he was not allowed to include icons, as our CIO had set a limit on the page size of 100k (including all images).

By the end of the project the page size was about 200k, but when the sales were ~20% higher (multi million $ shop) nobody asked about the size limit anymore.

So yeah, page size/load time is important, but please keep reflecting what you are doing. The best load times are no good, if (as a result) the UX is poor.


It's also funny how many of my friends use $TrendyJSFramework for a portfolio site! Personally prefer to have all my content statically served unless I'm building some sort of an interactive single page application.

I run https://discoverdev.io for which I wrote my own SSG framework in python+jinja, I spit out html and push it to netlify's CDN network. It's smooth, I don't have to worry about scaling and it works like a charm! (I could still do a lot of optimisation wrt image compression and minification though)


Just pushed live an app today where I had to cave and add babel-polyfill so it would work in IE 11, after trying to polyfill a dozen missing ES6 runtime features from various sources.

That made the final JS output 25% larger, a 90kb increase from the polyfill library alone, so here's a serious optimization waiting to happen (the natural dying of an older browser).


Couldn't you load babel-polyfill only in cases where the browser lacks the desired features?
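Something along these lines should work (the paths are made up, and it uses callbacks rather than Promises since Promise may itself be one of the missing features):

    function loadScript(src: string, done: () => void): void {
        const s = document.createElement("script");
        s.src = src;
        s.onload = done;
        document.head.appendChild(s);
    }

    const needsPolyfills =
        typeof (window as any).Promise === "undefined" ||
        typeof (window as any).Symbol === "undefined" ||
        !(Array.prototype as any).includes;

    if (needsPolyfills) {
        loadScript("/polyfills.js", () => loadScript("/app.js", () => {}));
    } else {
        loadScript("/app.js", () => {});
    }

Modern browsers then skip the extra ~90kb entirely; only the older browser pays for it.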


I ran into the exact same issue not too long ago.

As an alternative, check out:

https://polyfill.io/v2/docs/


How come no one monitors their speed as much as they should?

SpeedCurve and Calibre are great options listed in that article. Also suggest MachMetrics https://www.machmetrics.com/


I cut my personal site's TTI by 62% after reading this. I was just lazy about performance when there were some real easy wins. I do wish for more guidance on how to do PRPL in React (without necessarily doing full Next.js, which is too opinionated for some of the stuff I want to do).



