I wonder why the article doesn't mention using the browser cache for your static CSS and JS assets instead of introducing a service worker as a first measure.
A few years ago I built a shopping-site MPA this way and the page transitions were barely noticeable.
EDIT: it appears Chrome decided in 2017 to stop revalidating on reload altogether, after Facebook complained to the Chrome devs that Chrome was more of a drag on their servers than other browsers.
> We began to discuss changing the behavior of the reload button with the Chrome team. […] we proposed a compromise where resources with a long max-age would never get revalidated, but that for resources with a shorter max-age the old behavior would apply. The Chrome team thought about this and decided to apply the change for all cached resources, not just the long-lived ones.
> Firefox was quick in implementing the cache-control: immutable change and rolled it out just around when Chrome was fully launching their final fixes to reload.
> Chrome and Firefox’s measures have effectively eliminated revalidation requests to us from modern versions of those browsers.
Manage dates and times in UTC/Zulu. Append the time zone if that metadata is needed, and store it on both the client and the server for the backend. That way you don't have to deal with time travel and can handle time shifts.
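A minimal sketch of that in JavaScript, assuming the zone name comes from the Intl API (the field names here are just illustrative):

```javascript
// Store the instant in UTC ("Zulu") plus the user's IANA zone as separate metadata.
const record = {
  // ISO 8601 in UTC, unambiguous across DST changes
  occurredAtUtc: new Date().toISOString(),
  // The zone is only metadata, used when rendering local times
  timeZone: Intl.DateTimeFormat().resolvedOptions().timeZone,
};

// Render back in the original zone when needed
console.log(
  new Date(record.occurredAtUtc).toLocaleString('en-US', { timeZone: record.timeZone })
);
```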
I would say that concurrency becomes a caching issue once proper locking is in place.
Same here (but not a shopping site). Bundle the JS and CSS into one file each and cache them forever (hash in the filename to bust the cache). Then each page transition is exactly one HTTP request for a small amount of HTML, and that's it. Fast and simple.
+ you need to pair that with immutable, otherwise the browser still sends revalidation requests on every reload, so you end up with more than one HTTP request.
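For illustration, a sketch of serving such hashed bundles with a long-lived, immutable policy, assuming an Express-style server (the filenames and max-age are just examples):

```javascript
// Serve fingerprinted assets with a long, immutable cache policy.
// Assumes bundle filenames contain a content hash (e.g. app.3f9c2a.js).
const express = require('express');
const app = express();

app.use('/assets', express.static('dist/assets', {
  immutable: true, // adds the immutable directive to Cache-Control
  maxAge: '1y',    // Cache-Control: max-age=31536000
}));

// The HTML is the only thing that should stay revalidatable
app.get('/', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.send('<link rel="stylesheet" href="/assets/app.3f9c2a.css">');
});

app.listen(3000);
```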
Makes sense as a theoretical problem. Have you ever seen data suggesting it's a practical problem? It seems like one could identify "should have been cached forever but wasn't" using ETag data in logs.
Facebook did a study about ten years or so back where they placed an image in the browser cache and then checked how long it stayed available… for something like 50% of users it had been evicted within 12 hours.
If one of the most popular sites on the web couldn’t keep a resource cached for long, most other sites have no hope, and that’s before considering that more people are on mobile these days and so have smaller browser caches than on desktop.
Browsers have finite cache sizes… once they’re full, the only way to make space for more is to evict something, even if that entry is marked as immutable or cacheable for ten years.
Exactly this. In our case we went so far as to cache all static assets. Putting them into a directory with a hash in the name kept them easy to bust.
Within a session browser caches are effective, but from session to session the content will likely get evicted as resources from other sites get cached - there’s an oldish Facebook study where they placed an image in the cache and after 12 hours it had been evicted for 50% of visitors.
Service Workers offer more control over cache lifetime, and in Chromium (at least) the JS gets stored in an intermediate form (which reduces the recompile overhead on reuse).
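For context, a minimal cache-first Service Worker sketch (the cache name and asset paths are placeholders):

```javascript
// sw.js - cache-first sketch; names and paths are made up for illustration
const CACHE = 'static-v1';
const ASSETS = ['/assets/app.3f9c2a.js', '/assets/app.3f9c2a.css'];

self.addEventListener('install', (event) => {
  // Pre-cache the hashed bundles
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  // Serve from the Service Worker cache first, fall back to the network
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```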
Edit: below isn’t true. You could set immutable in the Cache-Control header and the browser wouldn’t revalidate.
——
Original comment:
With the browser cache, the browser still needs to send a HEAD request to check whether the content was modified. These requests are noticeable when networks are spotty (trains, weak mobile signals…)
A Service Worker could cache the request and skip the HEAD requests altogether.
Set max-age in the Cache-Control header to a high value and the browser will not need to revalidate. When deploying a new version, "invalidate" the browser cache by using a different filename.
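A rough sketch of the filename trick (paths are made up; bundlers like webpack or esbuild do this for you automatically):

```javascript
// Fingerprint a bundle so a new deploy gets a new URL and the old cache entry is bypassed.
const crypto = require('crypto');
const fs = require('fs');

const source = fs.readFileSync('dist/app.js');
const hash = crypto.createHash('sha256').update(source).digest('hex').slice(0, 8);

// e.g. dist/app.3f9c2a1b.js - referenced from the HTML, served with a long max-age
fs.copyFileSync('dist/app.js', `dist/app.${hash}.js`);
console.log(`dist/app.${hash}.js`);
```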