I must say, I don't understand the disable JS movement. I browse with JS on, and uBlock Origin to block ads. It's rare that I have any javascript-related problems in my web browsing. On the other hand, I definitely use a number of sites that rely on javascript for useful purposes.
If you're worried about tracking, you can block ads and tracking scripts without disabling javascript. If you're worried about viruses, well, all I can say there is that in my experience and understanding, if you keep your browser updated, the odds of getting a virus via browser JS are exceedingly low. Doubly so if you're not frequenting sketchy sites.
I don't know, it seems to me like advice from a time before security was a priority for browser makers, and high-quality ad blockers existed. At this point, I really don't see the value.
On a great many web sites I have to spend the first 60 seconds on the site clicking on "X" boxes in popups to make them go away. In many cases there are actually several layers of popups obscuring the content, and some are delayed so they only pop up after you start reading the content. No I do not want to subscribe to your mailing list. No I don't want to take your survey. No I do not want to "chat" with your bot-pretending-to-be-a-human. Yes this is my sixth article from you this month but I will not be paying for a subscription, because I have enough of those already. Yes cookies are OK but I want the minimal set. And no I will not disable my adblocker because doing so makes this whole bloody nightmare even worse.
Disabling Javascript makes most of this insulting crap go away, and sometimes it is the only way to read the content.
I personally visit "a great many websites" daily, and rarely have this problem. Maybe it's uBlock doing its job, or maybe it's the kind of sites you go to?
Of course it isn't. Pihole and similar DNS-based blockers do nothing for those kinds of spam, which require DOM manipulation to remove. Check uBlock's settings after installing it. It has a separate "annoyances" list. I enabled everything in it a few years ago and never had a single problem. It removes all the GDPR banners, "please give us your email" popups, useless "oh I am so original" plates in forum signatures, etc. etc.
Pihole blocks server requests, but uBlock can do a whole lot more to the page by blocking specific HTML, CSS, and scripts. uBlock will prevent YouTube ads, for example, whereas Pihole cannot, as they come from the same server as the content.
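For anyone unfamiliar, a cosmetic filter is basically a CSS selector scoped to a site. The domain and class names below are made up, just to show the shape of it:

```
! Hypothetical uBlock Origin cosmetic filters (made-up domain and class names)
! Hide a newsletter popup and its overlay on one site
example.com##.newsletter-modal
example.com##.modal-overlay
! Hide "promoted" widgets everywhere
##.promoted-content
```

No DNS-level blocker can do this, because the elements are part of the page itself rather than a separate request.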
That may have been true whenever you last evaluated your choice, but I find that today, on-load modals are mostly burned into the HTML. With Javascript disabled, rather than not being there, they're instead always there, impossible to get rid of.
>In many cases there are actually several layers of popups obscuring the content, and some are delayed so they only pop up after you start reading the content.
I swear I've had ones that pop up as I move the mouse to close the tab.
Not to mention that a host of vulnerabilities were image-related a few years back (one of the original rootkits exploited a TGA bug).
> uBlock Origin
Honestly, this is the antivirus of the web. I helped my niece set up my old computer for Minecraft today, and she was explaining how her friend had installed viruses (adware, really) 3 times. Every one of those instances was caused by download link confusion for Minecraft mods. Disabling JavaScript isn't going to save you from being tricked into downloading shady software, only an adblocker will.
> one of the original root kits exploited a TGA bug
As a lover of old image formats and the security issues they can cause* this sounds fascinating, but some quick google searches don’t seem to surface what you are referencing. Can you share any more details?
* I once fell into discovering a memory disclosure flaw with Firefox and XBM images
There was the github ddos that existed (iirc) as an image that made a request when viewed (I think it actually ran a script) and a couple smaller botnets that used similar functionality in 2018.
No it doesn't. I use Firefox with ubo and Brave (without extensions) for work and I notice no "running circles around" by either browser. While I'm sure brave's native blocking is faster, in human perception the time difference is essentially nil.
In case the OP wanted to know exactly how Brave's adblock is different from uBlock Origin instead of a link to the marketing page with links to other things like cryptocurrencies:
Brave claims a speedup over Adblock Plus, but its blocker was inspired by uBO, so the performance is fairly similar; the main difference is that it's baked into the browser instead of being an extension.
> We therefore rebuilt our ad-blocker taking inspiration from uBlock Origin and Ghostery’s ad-blocker approach.
I use a 6 year old desktop which wasn't that great even when I built it, and a pretty terrible 8 year old laptop, and I don't have any problems with uBlock Origin's performance. I have almost every filter list enabled (except for some regional ones), which results in 153486 network and 173646 cosmetic filters total.
What are you talking about? uBlock Origin is not tied to Chromium, and is not controlled by Google (unlike Brave which is just a fork of Chromium).
Jesus, why does everyone these days automatically assume that everyone else is using Chrome or Chromium? It's almost as crazy as calling Windows a "PC".
Google controls the frontend APIs uBlock Origin is allowed to use, and they have pushed changes numerous times in the past (covered on HN) to intentionally nerf uBlock Origin because it hurts their bottom line.
I don't understand this way of thinking. Rust isn't magically fast. You can use most languages to write both performant and lazy code.
Also, just so you know, Brave isn't "written" in Rust alone; it is a big piece of software with a lot of parts, including but not limited to a rendering engine, a JS VM and a WASM engine.
The Rust part at most (unconfirmed) would be the glue that connects them together, and I doubt that's where the bottleneck is for most browsers.
Developers have been reimplementing W3C specs in Javascript. Forms don't work, password managers have no idea what's going on, back buttons are hijacked, and sometimes even scrolling is thought to be better handled by a 4MB JavaScript file than by the browser.
I must say, honestly, I don't understand the JavaScript (JS) movement. I am not a web developer, perhaps that is the reason.
IME, 9 times out of 10, web developers are using JS when it is not necessary. The user-configurable settings of popular browsers make it easy to designate the small number of sites that actually require JS and keep JS disabled for all other sites. These browsers anticipate that the user will not have one default JS policy for all websites; in other words, they do not expect users to simply leave JS enabled (or disabled) everywhere, and they acknowledge there will be situations where it should be disabled.
However as we all know most users probably never change settings. Doubtful it is a coincidence that all these browsers have JS enabled by default.
The number of pages I visit that actually require JS for me to retrieve the content is so small that I can use a client that does not contain a JS interpreter. Warnings and such one finds on web pages informing users that "Javascript is required" are usually false IME. I can still retrieve the content with the use of an HTTP request and no JS.
There is nothing inherently wrong with the use of JS. It is nice to have a built-in interpreter in a web browser for certain uses. For example, it makes web-based commerce much easier. However, I believe the largest use of JS today is to support the internet ad industry. Without automatic execution of code by the browser, with no user review, approval or even interaction, I do not believe the internet ad "industry" would exist as we know it.
I believe this not because I think having a JS or other interpreter is technically necessary, but because these companies have become wholly reliant upon it.
That's why disabling JS stops a remarkable amount of ads and tracking.
The idea that some remote server needs my processing power to display images and text is ridiculous; it only escapes notice because it has become normal. These things don't need my processing power, nor do they have a right to it by default.
I browse the web with JS disabled by default. If I encounter a site that has trouble with that, I enable it for that site until I can determine if it is worth leaving it enabled, which usually means at some point I'll be back there again and need it on.
For the most part, it is a superior experience to what I was seeing before with just an ad blocker. The most noticeable thing about it is probably how many images simply don't load because developers lean on JS for loading and scaling them.
That's not the point of this. This is not about disabling JS as a user of websites.
Heydon is a developer, and an influencer of developers. He's saying: web development is now absolutely obsessed with JavaScript, and it in no way has to be. The basics, HTML, CSS. That's what's important.
I keep it disabled just so sites load faster. Except for 1 or 2 sites, I don't care about anything except for the main text on a page, so it's a waste of time to let a ton of javascript run and load/format things that I don't care about.
I generally visit web pages with JavaScript disabled and then immediately click "reader view". But there are certainly a lot of sites where content is loaded with script, and for those pages I must enable JS. Q: Does anyone know of a Safari iOS add-in that allows for whitelisting of specific sites to run JS?
No clue on that one, sorry. I’ve been using Purify[1] for ages, but have no clue if it blocks JS - I suspect it blocks only some JS, because my experience of the web isn’t trash while using it, but I do have to disable it sometimes to use the heavily animated navigation systems that some sites implement.
Javascript is a privacy and security nightmare. It's almost equivalent to downloading and silently executing untrusted code on your machine. I say "almost" because Javascript code is virtualized and sandboxed. Though I have no doubt people have already discovered vulnerabilities that enable code to break out of the sandbox.
The cost-benefit analysis of JavaScript usage comes down on the side of enabling it for most people, because of how much of the web is completely broken without it. Sandbox escapes are rare but extremely valuable, and they absolutely exist: https://www.computerworld.com/article/3186686/google-patches...
AFAIK, JavaScript the language has neither privacy nor security issues of "nightmare" level.
> It's almost equivalent to downloading and silently executing untrusted code on your machine.
No it's not. The code is run in a VM, which is run in a browser. So, the code is limited in doing things to the browser, which itself is limited in what it can do to your computer (files and whatnot). So it's not at all like running untrusted code "on your machine".
> I say "almost" because Javascript code is virtualized and sandboxed.
It's virtualized (in the browser) such that all the code will run almost the same on different browsers and chipsets. Again, the browser code is what keeps the computer safe from any code it runs, including CSS code or other VMs it may use, like Java or Flash. Also the OS keeps the computer safe from the browser (or at least it should).
So, no it's not JavaScript that is the boogeyman here.
My understanding is that JavaScript is the primary mechanism used in browser fingerprinting and cross-site user tracking/"analytics". Isn't that a rather large privacy and (personal, if not specifically "cyber") security risk?
The "security features" of popular browsers will never protect the user from the tentacles of internet advertising.
Companies/organizations that author popular web browsers generally rely on the success of internet advertising in order to continue as going concerns; as such, they are obviously not focused on internet advertising, and collection of user data, as a "security threat".
I wouldn't say javascript serves a useful purpose anymore as it's being used to generate entire UIs. It's more a requirement, and performance is taking the heat.
This is not about how "you" do things; it is more about how things should be done! JS almost never provides what I want when I browse. I expect to get some information! I am not at the circus looking for adventures!
People who love to use JavaScript to prove that they have some kind of taste about UX should use some other platform, one for people who are interested in show business. Think of it as if public transportation were designed around who the driver is that day! Does that sound OK to you?
The web is just a connection to other people, not a tool for others to bully you by showing off how brilliant "the code" they wrote is!
As someone who has JS off by default for a long time (ever since I discovered how much it could remove annoyances, and this was back when SPAs were basically nonexistent) and is thus often subjected to "Please enable JS" messages which more likely than not will simply make me click the back button[1], I am delighted to see this exists --- I've thought of the idea before, but never did anything with it:
[1] I once enabled JS on a site that claimed it would provide "a better experience", and was bombarded with a bunch of ads and other irritations that just made me turn it off again. It was not a "better experience".
I forget about the back button. By default, I always open links in new tabs, which means the back button has no history. Also, SPAs have hijacked the back button or just broken it completely, so I've been trained not to count on it behaving as expected. There's also the mobile experience, where getting to the back button itself is often painful after the UI hides navigation from you.
Otherwise, I am 100% in agreement. If a page is so user-hostile as to not offer a friendly non-JS version, the tab gets closed.
> By default, I always open links in new tabs which means back button has no data.
I really wish, even if it was an optional setting, browsers would copy the history of the source tab when you did that. If I hit back in a tab I opened that way, I still want “where I got here from”, not “stay here” or “new tab page”, or especially “close the tab” (thanks a lot, Android Chrome).
Safari does this, at least on iOS. If you open a new tab, then hit back, it just closes the tab and takes you back to the origin tab. If you’ve closed the origin tab, it leaves the active tab open but goes back to what the origin tab was.
I think it depends on how you disable it. Anecdotally, I think it’s “if no scripts are executed on the page, render <noscript> content, but if any scripts at all are executed, don’t”.
Does uMatrix still exist (maintained)? I remember reading it is being discontinued.
(I occasionally used it out of curiosity, but found it too tedious in the long term. I have settled on CookieAutoDelete, which seems to address most tracking. Not many sites seem to run a completely server-based fingerprint database.)
It is not maintained but the latest version[1] still works fine, and I will continue to run it until that is no longer the case.
[1] A beta that you can download from the github page. I assume the latest stable version also works fine, but the beta had a few additional bugfixes and features and I haven't encountered any instability.
It logs requests to the site, which is far less invasive than the fine detail of browser fingerprinting and tracking that JS allows: JS can see your mouse pointer's position, how long you spent on each area of the page, which parts of the text you selected, and many many other things.
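To make that concrete, here's a minimal sketch of the kind of thing a page script can do. The /collect endpoint is made up, and real analytics scripts batch and throttle instead of firing on every event:

```js
// Minimal sketch of JS-based behaviour tracking (hypothetical /collect endpoint).
document.addEventListener('mousemove', (e) => {
  navigator.sendBeacon('/collect', JSON.stringify({
    type: 'move', x: e.clientX, y: e.clientY, t: Date.now()
  }));
});

document.addEventListener('selectionchange', () => {
  navigator.sendBeacon('/collect', JSON.stringify({
    type: 'select', text: String(document.getSelection()).slice(0, 100)
  }));
});
```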
You can implement similar mouse recording via requests for :hover pseudo-elements in CSS. Also, I’m not sure you need JS to get fine fingerprinting and tracking in 2020: https://wiki.mozilla.org/Fingerprinting
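The CSS trick works roughly like this (hypothetical selectors and tracking URL): each region gets a :hover rule whose background image is a unique URL, so the server log shows which parts of the page the pointer passed over.

```css
/* Hypothetical sketch: hovering a region triggers a request the server can log. */
#headline:hover { background-image: url("/track?area=headline"); }
#pricing:hover  { background-image: url("/track?area=pricing"); }
```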
Legitimate tools for measuring effectiveness of pages with little in the way of nefarious tracking afaics. Also very useful for replaying user errors/problems.
There are many companies with similar products: Inspectlet, Lucky Orange, probably more. This is a cat that will be quite difficult to put back in the bag.
There are also ways to track you on the front end without JS, but I think "This website will not… track you" is just a promise from the site, not something that follows from the lack of JS anyway.
It could also be a canary in case the site gets bought out, and the new owner wants to implement invasive tracking. If a site has "will not track you" when you visit it, but the next visit it is removed...
Ye olde days of tracking just used invisible .gifs and every click was a different webpage so they just tracked which ones were requested to gain interaction metrics.
JS doesn't have any magic to it, location information is opt-in, but your IP is a much better advertising identifier.
An IP behind NAT or CGNAT is not that useful, but many mobile browsers (especially on cheap Androids) leak so many trackable details through headers that it becomes easy to uniquely identify devices/users.
back in the '90s I was into connecting to IRC servers using spoofed IP addresses. The way it worked is you told the software what OS you were connecting to (or it would figure it out itself, I can't recall). Each OS had a unique way of generating TCP sequence numbers, which allowed the software to guess which number would come next.
Nowadays OSes have protection for this sort of thing. But I'd imagine you could still fingerprint an OS like that. Combine that with TLS, HTTP, etc. specifics and you could narrow it down quite a bit I bet.
How are you going from guessing TCP sequences to spoofing IP addresses on TCP connections? Did you breeze over a step or am I missing something obvious?
TCP packets contain sequence numbers that must correspond to the ones sent by the other side. This is an issue if you're spoofing packets because you don't receive packets (containing the sequence numbers) from the other side (they will go to the spoofed address, rather than yours). Without the other side's sequence numbers, your replies will be considered invalid, which means you can't complete the handshake[1] to establish a connection. However, if you can successfully guess the sequence numbers, you can complete the handshake and also write arbitrary data to the stream. You still won't be able to receive data, but for simple protocols like IRC, it can still be useful, e.g. connecting to a server and then sending spam to a user/channel.
The mitigations for spoofing sequence numbers might be different for each OS, and that would allow the OS to be fingerprinted. See nmap's OS fingerprinting, for example.
> most people don't run their own resolvers, so at best you're fingerprinting DNS server of the ISP.
That’s not how it’s tracked commonly. Similar to HTTP caches, you can fingerprint visitors by how quickly a domain request resolves for them. Sure, all of this can be mitigated. But you have to even know what to mitigate. And the fact that even the most fanatical privacy folks aren’t aware of basic timing fingerprints is a good indicator that no one is mitigating it nearly as well as they might think.
If js were removed from the web tomorrow, the people currently working on tracking protection against js could instead focus on these other mechanisms. Because privacy is an arms race, reducing attack surface is not pointless even if the same tracking can be achieved by other means.
I don’t think JS (or some other runtime) could plausibly be removed from the web on any time scale. People have a (reasonable) expectation that they can do app-like things on a network, and that surface area will find its way to manifest one way or another. At least having it on the web has somewhat of a limiting effect on the entrenchment of the biggest (and worst) privacy offenders, because the barrier to entry is lower than building a wide array of native apps.
There's also a Firefox extension called "Disable JavaScript" that places a JavaScript toggle switch on your address bar and remembers whether you want JavaScript on or off on each specific site that you visit.
Brave browser has a fairly simple process as well. It requires two clicks. Click on the shield on the top right, then click on the "Scripts Blocked" toggle button.
I prefer making all my sites work just as well without JavaScript as with it. This is cool though if you need a quick check to see if JavaScript is enabled.
On what occasion would a piece of software like a browser, out of the box, not have JavaScript enabled? I’m genuinely curious; the only time I saw something like this was on the Kali distro, which had NoScript in Firefox.
I sometimes read HN in a text web-browser, but ultimately most people will have JavaScript enabled.
It's worth saying that most people don't even know what Javascript is, full stop. Weirdly enough, my mother now does, but my younger sister doesn't - we now have a generation that has effectively grown up post-smartphone, which is fascinating to me.
I've also been tinkering more with no framework js, although it's by way of typescript and webpack. And it's really fun, I think there are a lot of cases where you can just opt out of react or bootstrap if you know what you're doing
noscript is sadly, not perfect, but works if you stay 1st party.
A great way to make your browsing better is to disable 3rd party scripts by default and whitelist when needed, but <noscript> fails to work in those conditions.
You're thinking about Noscript the plugin. <noscript> is also an HTML tag that can contain content for use when the browser isn't running JS, but which would yield a cleaner page if not present when JS is running.
If you disable 3rd-party JavaScript (using uBlock Origin or others), noscript tags don't trigger, because scripting is still technically turned on, and noscript tags aren't associated with the script they complement, so the browser has no way of knowing which ones to run or not run in the 3rd-party situation.
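In other words, the situation looks something like this (domains made up): the blocked third-party script is what would have built the page, but because the browser itself still has scripting enabled, the fallback never renders.

```html
<!-- The extension blocks this request, so the page never gets built... -->
<script src="https://cdn.example.com/app.js"></script>

<!-- ...but this only renders when the *browser* has JS disabled entirely. -->
<noscript>
  <p>Static fallback content.</p>
</noscript>
```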
I use Purify ad blocker only for its JavaScript blocking option. I can re-enable JavaScript on a per-site basis by going through “Purify Actions” from Safari’s Share Sheet.
I combine this with another ad blocker (Wipr) to block everything else.
If someone knows how to achieve the same on Linux for Chrome and Firefox, I'd love to hear it (browser plugins are a bit of a security and stability shitshow, so non-plugin solution would be preferred, all else being equal).
Ugh, sorry, I misread that. The workaround I would recommend on iOS is just turn javascript off in Safari and install another (Safari-skin) browser like Chrome for the times when you need javascript -- or vice versa, depending on what your default preference is.
Mozilla removed the normal settings toggle a while back, so your options are to manually set javascript.enabled in about:config or use an extension. I agree in general that extensions aren't great, but I trust uBlock.
I think Chrome actually is fine although I don't know of a keybinding for it: Don't they still have a toggle right in the site menu (click the icon to the left of the URL to toggle all kinds of these things)?
In Vivaldi (which I'm 99% certain matches chrome for this part) you click the icon to the left of the URL and then click "Site Settings" at the bottom of the resulting menu where you can toggle a slew of permissions, including javascript.
It's about the best UI I could come up with for this particular knob and the other things adjacent to it.
You can still disable JavaScript via the developer tools. Although it only lasts for the current session and resets on reload, so you must tick that checkbox every time you navigate to a new link.
From a security standpoint you should at least have uBlock Origin installed for Chrome and Firefox. It can block JavaScript easily. I would trust NoScript as well, which provides fine-grained control over scripts. I get not wanting any extensions, but blocking scripts will unfortunately never be viable in a user base that doesn’t know what a “script” is...
So running untrusted code on your computer is a bad thing. Well, it should be in a sandbox, but that sandbox has many intentional, and probably some less intentional, holes that let code do stuff on your machine.
So how do I trace what Javascript is doing on my machine?
Generally mitmproxy gives a feeling for what sites the browser talks to. And strace often gives a good feeling for what a Linux binary does. But the browser is too big and complicated to read strace output in most cases.
Can anybody recommend a tool to look what Javascript code loaded by a certain page is doing?
> Can anybody recommend a tool to look what Javascript code loaded by a certain page is doing?
Open your browser's developer tools, go to the Script/Debugger tab and have at it. It's about as obtuse a tool as gdb, but you'll see exactly what the code does. Chrome dev tools has automatic formatting of the code, maybe Firefox too. But you'll be stuck with shitty variable names if they've been mangled. Although you could try http://www.jsnice.org/; I had variable luck with it.
It would be interesting to have a browser tool that is like strace and you could filter by calls, so you can see exactly where window.navigator is being used for example, or localStorage.setItem. For now best you can do is searching for "navigator" which works, but can be minified/hidden away by coder as well.
> It would be interesting to have a browser tool that is like strace and you could filter by calls, so you can see exactly where window.navigator is being used for example, or localStorage.setItem.
Because JavaScript is dynamic, you can often rebind things like window.fetch to trace what’s going on (store the old copy and replace with a new one that does something before delegating). If you can arrange your shim to be loaded before any other JS, I guess you could implement something like object capabilities for JavaScript?
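A minimal sketch of that rebinding idea, run before any other script (e.g. from a user script or extension content script); nothing here is specific to any particular site:

```js
// Wrap window.fetch so every call is logged before delegating to the real one.
const realFetch = window.fetch.bind(window);
window.fetch = (...args) => {
  console.log('[trace] fetch:', args[0]);
  return realFetch(...args);
};

// The same trick works for other APIs, e.g. localStorage.setItem.
const realSetItem = Storage.prototype.setItem;
Storage.prototype.setItem = function (key, value) {
  console.log('[trace] localStorage.setItem:', key);
  return realSetItem.call(this, key, value);
};
```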
>Can anybody recommend a tool to look what Javascript code loaded by a certain page is doing?
Almost all modern browsers have debug panels that will list all of the requests made by a page, assets cached, cookies and local db, and of course it's trivial to just view the source of a site and read the javascript if it isn't compressed.
Yes, I am aware of dev tools. I use them to look at network requests or to "steal" my own credentials for curl usage.
Haven't really used the Javascript debugger, but my guess is it would be completely infeasible to follow everything a random "modern" web site might do. And as you say, some Javascript might be compressed or obfuscated. What I really want is a somewhat higher-level / more filtered approach, like how strace lets me trace just file operations, for example.
I'm not sure if it always works on all APIs or browsers, but you can wrap and replace DOM API objects with logging proxies.
Additionally, you can set breakpoints on event handlers and Chromium has deobfuscation built in. You can usually tell approximately what's going on by stepping through the code and watching the variables in local scope.
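For example, a logging proxy around navigator might look like this. It's only a sketch (a page won't automatically route its own code through it), so treat it as something you'd paste into the console or inject from an extension:

```js
// Log every property access on navigator by routing reads through a Proxy.
const spiedNavigator = new Proxy(navigator, {
  get(target, prop) {
    console.log('[trace] navigator.' + String(prop));
    const value = Reflect.get(target, prop);
    // Bind methods to the real navigator so they don't throw "illegal invocation".
    return typeof value === 'function' ? value.bind(target) : value;
  }
});

console.log(spiedNavigator.userAgent); // logs the access, then prints the value
```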
I’d love to get rid of JS for my personal sites, but I know of no clean and simple way to render dynamic content otherwise. Server-side rendering seems a bit messy. I want a clean REST API separate from the UI.
How can I generate pages with dynamic content easily? Ideally with absolute minimal dependencies.
Depends on what you mean by "dynamic". Some "do something when the user clicks"-style things can be done with CSS. The best example I know of that is https://git-send-email.io/
Abusing HTML form inputs as a way to store application state in DOM is hostile to usability and accessibility - please don't do this for any sites humans need to use. HTML does have progressive disclosure elements built in with `<details>` and `<summary>` which work without JS (I was hoping the demo was showing that or something like it)
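For reference, the built-in version needs nothing more than this:

```html
<!-- Progressive disclosure with no JS and no CSS hacks. -->
<details>
  <summary>Show installation instructions</summary>
  <p>The browser handles open/close itself and exposes the state to assistive technology.</p>
</details>
```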
The HTML elements in that demo are radio buttons being used to choose between options of OS, mail server, etc. Each radio button is associated with a label that contains the correct text. Selecting a radio button unambiguously makes a `display:none` element visible.
What part of it do you think is bad for usability or accessibility?
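For anyone following along, the pattern is roughly the following (element names made up), which is why it keeps working with JS disabled:

```html
<input type="radio" name="os" id="os-linux" checked>
<label for="os-linux">Linux</label>
<input type="radio" name="os" id="os-mac">
<label for="os-mac">macOS</label>

<div class="instructions" id="linux-instructions">Linux instructions…</div>
<div class="instructions" id="mac-instructions">macOS instructions…</div>

<style>
  .instructions { display: none; }
  /* Show the block that matches the checked radio button. */
  #os-linux:checked ~ #linux-instructions { display: block; }
  #os-mac:checked ~ #mac-instructions { display: block; }
</style>
```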
Everything you just said - these form inputs aren't related to a form, and assistive technology won't have any clue about the clever CSS selector hacks and what their effects are.
If you want to build something that's nice for humans and machines, look up best practices for this sort of thing - plenty of information is widely available on how to build things in usable and accessible ways (and it's simpler to do it correctly than to use these 'hack'-like workarounds anyway!)
A REST API is purely a server-side issue, so you don't need JS for that (unless your server is Node of course).
You still need Ajax (at least) for truly dynamic content. The bad news is that Ajax requires client-side JS. The good news is minimal Ajax functionality only takes about 30 lines of pure Javascript. The bad news is you will need to write those 30 lines yourself because every JS library is at least 100x bigger than that and packed with cruft you don't need.
My advice is to pretend React doesn't exist, learn Javascript, and write just the Javascript you need and no more.
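For what it's worth, with the fetch API those "30 lines" can be even fewer. The endpoint and element ID below are hypothetical:

```js
// Minimal "just enough Ajax": fetch JSON and render it into an existing list.
async function loadComments(postId) {
  const response = await fetch(`/api/posts/${postId}/comments`); // hypothetical endpoint
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const comments = await response.json();

  const list = document.getElementById('comments'); // hypothetical element
  list.textContent = '';
  for (const c of comments) {
    const li = document.createElement('li');
    li.textContent = `${c.author}: ${c.body}`;
    list.appendChild(li);
  }
}

loadComments(42).catch(console.error);
```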
It does feel messy if you insist on using a REST API for your server-side pages (rendering React components server-side?). Just abandon it and use an HTML template-based system instead, like what most server-side frameworks do (Django, Rails, Laravel, etc.).
While it can definitely be a bit messy, out of the box SSG/SSR support on Next.js is trivial (it’s on by default and there’s an indicator that shows when a page will be built static and clear/simple APIs for dynamic SSR). Certainly not minimal dependencies, and you will hit a webpack wall if you need to change the build. That said, I’m working on simplifying this for my own site (Preact instead of React, static Markdown content, separate style loading, hopefully soon automatic partial hydration for whatever JS does end up on the wire) and I plan to break it all out into simple libraries so people can do similar with the same toolchain without the webpack nightmares I’ve experienced.
That's what I mean. It feels much messier than JSX, where "putting strings together" is a clean first-class citizen.
With SSR, you need some component that's aware of every change, and that triggers those re-renders at sensible times (every render takes server resources). This all feels messy, compared to rendering just-in-time on the client-side.
I wonder what use case you're talking about; I don't have a lot of cases where re-rendering needs to be dealt with, unless I'm making a web app like a multiplayer Photoshop in the browser, in which case I won't use SSR. For most information-based sites, e.g. Hacker News or Twitter, there's no need to track changes and re-render; all the data can be ready on the server side by querying a database or making requests to third-party APIs, and then it's just piecing HTML strings together.
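To illustrate how unremarkable "piecing HTML strings together" is, a server-rendered page in Node can be as small as this (the data and port are made up):

```js
// Minimal server-side rendering by string concatenation (hypothetical data).
const http = require('http');

const escapeHtml = (s) =>
  String(s).replace(/[&<>"]/g, (c) => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));

http.createServer((req, res) => {
  const posts = ['First post', 'Second post']; // pretend this came from a database
  const items = posts.map((p) => `<li>${escapeHtml(p)}</li>`).join('');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<!doctype html><title>Posts</title><ul>${items}</ul>`);
}).listen(8080);
```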
From one of my previous comments in another thread (this past week, probably):
> IANAL
> What I think they mean by this is that you shouldn't link to resources on their website to make it seem like they endorse your (product, website, whatever).
That's so crazy it might just work. I checked an article about copyright protection for short phrases[0] and learnt that a court ruled that the text “I may not be totally perfect, but parts of me are excellent” is protected by copyright. It would thus be possible for the author of that phrase to register:
and sue anyone who links to them. Hopefully the author will be so grateful for this insight that they won't sue me for reproducing their copyrighted work in this comment.
How can someone spoof the page/inject ads if the site is served over https?
They would need to have compromised one of the root certificates on your machine to not give you a giant security warning.
In modern browsers there’s not even a button to bypass them (although I know in Chrome you can type “this is unsafe” into a hidden input on the error page and it will let you bypass it temporarily).
It's worth noting that this is the personal website of Heydon Pickering, a prolific (and opinionated!) writer and speaker on web design. He's the force behind Inclusive Components [0] and half of Every Layout [1].
Watch his videos. Check out his articles on A List Apart and in Smashing Magazine, among others. Pay attention, he's very thoughtful and you'll probably learn a lot.
Wow, something is wrong with uMatrix. I have uMatrix with 1st party javascript disabled by default. Yet this site says "Please disable JavaScript to view this site." To be sure, I curled the source and hosted it elsewhere, and my browser (firefox 84.0b4 on linux with uMatrix) still runs the JavaScript.
The JS is inline. Neither uM nor uBO static rules block that. The only way to block it is a uBO dynamic rule `no-scripting: $hostname true`, which you'll also need to be able to render the `<noscript>` content anyway.
BTW, you probably want to move off of uM given gorhill has abandoned it in favor of uBO. (I converted all my rules to a mix of uBO dynamic rules for JS and static rules for everything else, except for cookies which I still use uM for because uBO can't manage them.)
Is there a guide for moving from uMatrix to uBO? uMatrix works exactly how I want/expect...
BTW, the site works as expected with my Linux/Firefox/uMatrix setup... the inline scripts are disabled by default and I see the page content. I'm not sure why GP had issues.
2. Block images by replacing them with the built-in 1x1 GIF instead of canceling the request.
3. Disable web workers by setting the CSP worker-src.
4. Override the previous rules by allowing first-party CSS, frames and images. (The @@ means it's an override rule.)
(The fact that my default is to block everything is why the first example I gave above starts with @@ too.)
Web workers can be allowed on a per-site basis by overriding the csp directive with a reset:
@@||foo.com^$csp
Lastly, I have a dynamic rule to allow `<noscript>` tags to be rendered:
no-scripting: * true
Then, for every static rule where I enable JS for a domain, I add a corresponding `no-scripting: $domain false` in the dynamic rules.
It's annoying to have to move between static and dynamic rules when deciding to enable JS on a site, but I'm not sure there's a better way. Neither static nor dynamic rules individually support everything that uM could do - static rules can't block inline JS nor render `<noscript>` content, and dynamic rules can't block every kind of request.
Static rules are also nice in that you can have empty lines and comments and arbitrary ordering of your rules, so it's easier to group rules in sections based on the domain names, add comments, etc. Dynamic rules however are like uM's rules and are forced to be sorted by domain name with no empty lines or comments.
I expected this to be implemented with something like document.body.textContent = "Please disable JavaScript to view this site" on page load. Which would be enough to work. It's actually exactly it, but with the extra precaution of wrapping the entire page inside a noscript tag. Hilarious. I guess it's useful to avoid having a chance to see the content during a flash, before the script executes.
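So the page presumably boils down to something like this (my guess at the structure, not the actual source):

```html
<body>
  <noscript>
    <!-- The real content; it only renders when the browser has JS disabled. -->
    <main>Actual site content here.</main>
  </noscript>
  <script>
    // With JS enabled, this is all the visitor ever sees.
    document.body.textContent = 'Please disable JavaScript to view this site.';
  </script>
</body>
```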
A good contrast with web pages which are not apps telling you "This app requires Javascript to run".
There seems to be an HTTP header[1] you could use. No icon though. And I'm not sure how much the restriction could be evaded by interactive Turing-complete CSS or similar features.
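If the header being alluded to is Content-Security-Policy (my assumption; I don't know what [1] pointed at), a page could declare that none of its scripts should run with something like:

Content-Security-Policy: script-src 'none'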
Right click -> View source -> Ctrl+A (select all) -> Ctrl+C (Copy to clipboard) -> Open your preferred text editor (mine is Notepad++) -> Ctrl+V (Paste from clipboard) -> Replace ">" with ">\r" -> enjoy the source reading.
Takes max 10 seconds, on any site. Can you do it in less than 10 seconds using that validator?
Any guesses as to why the source code looks the way it does? I was expecting a traditional website inside the noscript tags, but instead I found an obfuscated letter soup between them.
I am not a developer, and I learned how to disable JavaScript in Chrome to read this site, out of curiosity. I then checked a few other sites and learned
* Washington Post no longer has a paywall
* anandtech.com is seemingly unaffected, but tomshardware.com is very different (and less pushy)
* SoundCloud says "Oh no! | JavaScript is disabled | You need to enable JavaScript to use SoundCloud"
* nationalreview.com is broken -- I can only read a few paragraphs in a NR PLUS story, and there's no way to keep scrolling (I can read the article fine in another tab)
* An article I co-wrote, published in a Cambridge University Press journal, is now sans tables and figures, but the console reports no errors or exceptions.
An interesting experiment! Overall, it seems my internet experience is better without JS (but reading an academic article online is way worse).
I disabled it with 4 taps in uBlock Origin, Firefox Android. Open menu, Add-ons, uBlock Origin, the JavaScript icon. Then another tap to reload the page, the back button and the site did show.
The reason you see a blank page is because the website wraps content into a <noscript> tag and by "disabling javascript" it wants you to render <noscript> content, not actually disable javascript. But it's a bad idea in practice. Most websites work better out of the box if you disable javascript, but not render <noscript>, and also disable CSS, and inject some custom style to fix size of embedded icons. I'm not sure if there is a public extension that does that though.
I really don't think that's true. For one thing, anything beyond static content sites are just not going to work. And even content sites could well be unusable without their layout CSS. I really don't see the point in subjecting oneself to such hassle.
I would suggest that most websites work better if you have JavaScript and CSS enabled, since that's what they were designed for, but use an ad blocker like uBlock Origin to remove the ads.
That's literally the only thing it did until I reconfigured my browser to access it. It's a misuse of `<noscript>` and it's completely unnecessarily intruding on how I use my own computer to access the content. I thought that was the kind of thing people here (especially the anti-JS people) frown upon.
I’ll add that I’m currently building a website, taking great care that it will be fully operational without JS, and progressively enhance components where there’s some interaction I want to provide to people who accept it. I wholeheartedly agree that there’s way too much JS trash on websites, and that it’s all sorts of vectors for abuse. But this kind of hamfisted response is really painfully unhelpful. You don’t have to browse with JS enabled, and you’re welcome to encourage people to make the web work without it again. But making content inaccessible without finding an obscure browser setting isn’t going to appeal to anyone except the people who are already locked and loaded.
My philosophy is: set a good example, make the benefit clear, and communicate with other devs who might not realize the horror show they’re sending down the wire/executing in their users’ browsers. But forcing people to figure out how to disable something increasingly hard to disable before they can even hear you is not good communication.
I only clicked the link because I was hoping there would be a regular site under default config and some special treat with JS disabled. Progressive enhancement. That would have been a clever and compelling execution.
For a personal website showcasing their work, I think requiring JavaScript to be turned off goes a little too far. I removed JavaScript from my sites to improve accessibility and user experience, but for this I have to go to settings and turn off JavaScript, which is a worse user experience than having JavaScript on the site.
In my view, micrototalitarian actions like this are every bit as bad as the ills they seek to cure. Insisting on infringing the liberty of others, before even so much as talking to another person is the very core of what ails our society in this time.
He may have insulted you, but not all insults constitute logical fallacies.
Ad hominem: You're wrong because you're an idiot.
Just an insult: You're an idiot because you're wrong.
Furthermore, concluding that somebody is wrong because they used a logical fallacy is itself a logical fallacy. If I said "2+2=4 because you're an idiot" my reasoning would be fallacious, but to conclude that the answer must therefore not be four is also fallacious.
You don’t have a right to browse someone else’s content.
Calling out your histrionics for what they are is not ad hominem. I’m attacking your statement, not your person.
Further, that’s not some sort of axiomatic law, that’s just a phrase. Even if it was, losers using ad hominem doesn’t mean winners don’t, that’s not how logic works.
micrototalitarian, adjective: relating to a system of government that is a tiny bit centralized and dictatorial and requires a microscopic degree of subservience to the state.
I think it's a conflict between Tor trying to be as mainstream as possible and onion sites trying to reduce users' attack surface; JavaScript has been used against users on Tor.
How nice! I hope others support this trend so that the web is clearly split into two: (1) the traditional one, document-based, and (2) applications. Both are very useful, but (2) comes at a price not everybody wants to pay.
Isn't this one of those links that steals your Facebook session? I didn't look at the code, but the fact that you get the Facebook "translate" page seems weird. I have seen the same link posted on Facebook.
Well, I went through the trouble of disabling javascript and was treated to an utterly mundane blog site with astounding insights worthy of a 22 year-old, such as "if you're a libertarian you're just being exploited by the capitalists, man."