
I am not protesting the idea that a site should work without JS - quite the opposite, I accept that active content and user interaction requires JS (as much as I loathe the language).

What I am protesting, however, is that graceful degradation doesn't happen. The entire site either breaks or blocks you from viewing static content unless you enable JS. Why am I prevented from viewing a product's information page before I decide whether or not I want to add it to the cart? Why does a news article render as 3 columns of disjointed text that are unreadable until they're assembled? Why don't images load at all unless I permit a CDN to shove a load of JS into my browser for fancy slideshows, zooming, whatever, rather than just placing a small static image there so I can decide if I want to zoom into the dynamic larger image?

The worst ones are those that pop up a banner or overlay that darkens the page, with the article already loaded behind it, until I enable JS. Like, you have literally proved your page works without JS. There is no reason for me to enable it, except to let you load several different analytics modules that chew up my CPU and siphon as much identifiable data as they can get their hands on.




This. As an aside,

> banner or overlay that darkens the page

Sometimes with ABP or uBO you can right-click and blow out that <div> which their JS would have cheerfully made invisible. Also, sometimes FF's "reader view" blanks everything but the meat of the article, even if the article was hidden. (You probably know that; this is for future reference and any passers-by, until the world breaks it all again. Vive la révolution.)


I'm savvy enough with the dev tools and CSS to set `style="visibility: hidden"` on most overlays, but yeah, I can imagine that trick isn't going to work forever either.
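The dev-tools trick above can be scripted. Here's a minimal console/bookmarklet-style sketch; the heuristic (a full-viewport `fixed`/`absolute` element with a high z-index counts as an overlay) and the threshold are my assumptions, not how any particular site builds its paywall div:

```javascript
// Heuristic check: does a computed style look like a page-darkening overlay?
// (Assumption: anti-JS overlays are fixed/absolute with a large z-index.)
function looksLikeOverlay(style) {
  return (style.position === "fixed" || style.position === "absolute") &&
         Number(style.zIndex) > 1000;
}

// In a browser console you might apply it roughly like this:
// for (const el of document.querySelectorAll("div")) {
//   if (looksLikeOverlay(getComputedStyle(el))) el.style.visibility = "hidden";
// }
// document.body.style.overflow = "auto"; // many overlays also lock scrolling
```

Of course, as noted, nothing stops a site from re-adding the overlay from a mutation observer, which is why the trick won't work forever.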


The dynamic DOM frameworks (and webpack-style bundlers) have a tendency to encourage moving everything into JavaScript. Generally this means that enabling graceful degradation requires building most things twice: once in static form and again for when Vue et al. have loaded. Separating the two takes significant time and effort, which also introduces extra surface area for bugs and odd interactions. It is expensive to build this way, and there are other things worth prioritising.

Ultimately all websites are optimised for their target audience: if 30% of our users used IE6 then we would (sadly) target that. If users value ease of use on a baseline browser then that is prioritised, and if static information and no-JS turned out to be what most users valued highly then we would do that too.


You would think that SSR could take care of the problem and not require one to build twice. Most of these frameworks are very opinionated; if they cared about doing the right thing, I don't think it would be hard to extend their opinions so that content is tagged in ways that allow the framework to automatically generate the multiple views needed for graceful degradation.
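To make that concrete, here is a toy sketch (not any real framework's API; `h`, `renderToString`, and the `data-hydrate` attribute are all made up for illustration) of the SSR idea: the same component tree is rendered to a static HTML string on the server, so the content is readable with JS off, and only the parts tagged as interactive would need client-side hydration:

```javascript
// Minimal virtual-DOM node constructor (hypothetical, React-like).
function h(tag, attrs, ...children) {
  return { tag, attrs: attrs || {}, children };
}

// Server-side render: walk the tree and emit a plain HTML string.
function renderToString(node) {
  if (typeof node === "string") return node;
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const inner = node.children.map(renderToString).join("");
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

// A product page: the static info is plain markup; only the button is
// tagged for hydration, so an opinionated framework could ship the text
// as-is and attach JS behaviour only where the tag says so.
const page = h("article", null,
  h("h1", null, "Widget"),
  h("p", null, "A widget that does widget things."),
  h("button", { "data-hydrate": "add-to-cart" }, "Add to cart"));

const html = renderToString(page);
```

The point of the `data-hydrate` tag is exactly the "content is tagged" idea above: the framework already knows which nodes carry behaviour, so it could emit the readable static view for free.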


> an increasingly rapid arms race for engaging design

For us, the arms race is that other one about getting code to execute on people's machines, including the virtual ones, versus preventing that. The mere existence of good code that is worth running makes it a hard problem. More of that just makes it more pressing even if not actually harder.

> Ultimately all websites are optimised for their target audience

Ultimately all audiences are being optimized for the ideal website, which shares its ideal viewing conditions with the websites being used against their users (i.e. code runs even though we don't know or explicitly trust the authors, their employers, or really anyone around it).



