Using const/let instead of var can make JavaScript code run 10× slower in Webkit (github.com/evanw)
237 points by davidbarker on Oct 21, 2020 | 179 comments



Looks like the JavaScriptCore people are on it :)

As an aside, I’ve been seeing a number of articles show up that basically go like “Chrome implements x efficiently but semantically equivalent y poorly, so you should always use x”. I really hope we don’t end up with JavaScript code being written to be optimized well on one particular implementation of a JavaScript engine… it’s interesting to see this in this area specifically because other VMs (e.g. Java) have knobs you can turn yourself, but with JavaScript your code runs without much input on your side, so methods to influence the VM are much more specific…


I remember doing this back in 2006 before Chrome even came out. Remember when it was a good idea to cache the .length property because it was O(N) on some browser (I think an early IE), so you'd write your for-loops like this?

  for (var i = 0, len = arr.length; i < len; ++i) { ... }
Hell, the whole premise for React (the virtual DOM) is based on outdated performance advice. Chrome has used a dirty-bit for DOM manipulations since a year or two after React came out; manipulating the DOM is within a factor of ~2-4 of setting a property on a JS object (and much faster than constructing & copying whole new JS objects, which incurs a GC cost), it's just that you really want to avoid interspersed manipulations & queries, which force a page reflow:

  parent.appendChild(document.createElement('div'));   // Fast; ~50 us
  let w = parent.innerWidth;                           // Slow, forces reflow; ~20 ms
  let h = parent.innerHeight;                          // Fast again; no page modifications
  parent.appendChild(document.createElement('div'));   // Fast; just sets dirty bit
  parent.appendChild(document.createElement('div'));   // Still fast; dirty bit already set
But that's the nature of a lot of technical rules of thumb. They get stale as the underlying stack beneath them changes.


> the whole premise for React (the virtual DOM) is based on outdated performance advice. Chrome has used a dirty-bit

This mentality sorta falls into the exact pitfall GP mentioned (i.e. what you really want is for IE/Edge to be significantly faster, bringing speed gains for everyone else along the way, not for Chrome to be microscopically faster at the expense of every other browser).

There are real and significant costs in things like random access of .childNodes or bottom-up DOM construction in some browsers. At the height of the whole virtual dom thing, there was a lot of super browser-specific micro-optimizations going on, a lot of it w/ a heavy focus on Chrome, with IRHydra and stuff.

It got a bit ridiculous when I figured out that you could make a significant dent in one of the popular benchmarks at the time not by tweaking JS anymore, but by removing unused CSS rules from bootstrap.css...

But as you said, engines change quickly and I'm not sure it's really worth chasing micro-optimizations anymore. We're getting to a stage where the only way to get faster is to simply run less code. Some frameworks are getting the idea (svelte, marko).


Or build better APIs into the browser and optimize them in the C++. IMHO that's the best long-term solution. Moves glacially slow, though, since there's a large engineering cost that needs the coordination of each major browser vendor.

I was a big fan of Polymer/WebComponents in 2014, but they kinda dropped the ball. In hindsight I wish they'd just implemented the React API directly in the browser. (Though depending on how Google vs. Oracle goes in the Supreme Court, maybe that'll become illegal, sigh.)


This worked amazingly well for querySelector (vs jQuery's sizzle and friends at the time), but IIRC, JIT is so good now that there are APIs written in JS _because it's faster to do so than to use AOT_. I recall reading somewhere that the cost of jumping between JS and C++ land was a source of slowdowns with the DOM API, so there's that too.

But overall, I agree with the idea of incorporating the ideas that are working into the platform. There have been various discussions over the years about how virtual dom could work as a native browser API, but unfortunately, the needle hasn't moved there at all.


Dunno if you've looked recently but how do you feel about Polymer 3, which has essentially become lit-element (which is powered by lit-html)?

Just in case you haven't been watching the space at all:

- https://43081j.com/2018/08/future-of-polymer

- https://www.polymer-project.org/blog/2020-09-22-lit-element-...

- https://lit-element.polymer-project.org


I tried it for my project and found lit-html's component composition unintuitive compared to React, Svelte and other leading frameworks.

Also, I've seen the maintainer of lit-element flaming people in comment threads. It doesn't inspire confidence.


Thanks -- correct me if I'm wrong but I thought lit-element is the bit that does components and lit-html (I think the naming is pretty bad here) is just the rendering bit, is that right?

I was already on the fence about what to try next, and Svelte is at the top of the list if I have any problems with lit-element... lit-element just seems a bit lighter and more standards-positive so I wanted to give it a go.

These days though, I'm basically not considering investing in any libraries/frameworks that don't offer SSR with competent hydration. IMO it's the closest we get to the holy grail in frontend -- separation from the backend (which I argue is a benefit), and the SEO-friendliness, no-JS compatibility, and speed of server side rendering.

lit-element doesn't have a good SSR story just yet (it's experimental[0]), while Svelte has Sapper[1] and ElderJS[2], so it's already ahead there...

[0]: https://github.com/PolymerLabs/lit-ssr

[1]: https://sapper.svelte.dev

[2]: https://github.com/elderjs/elderjs


React would be pretty horrible to implement in the browser. It can't use most of the DOM API: you can't set properties on elements, can't add event handlers to elements, can't create comment nodes. And VDOM diffing is one of the slower ways to update the DOM.

We should build templating and DOM updates into the browser, but we should do better than React.


All browsers have used a dirty bit for layout for at least the past 2 decades (source: I have worked on browser engines for 2 decades). This is not some new Chrome thing. And all browsers continue to have that same hazard that querying layout-dependent properties forces a synchronous style recalc and layout.

The benefit of React (if any; it has a lot more overhead than vanilla JS+DOM) is that you can't hit that specific pitfall of forcing layout interspersed with DOM or style manipulation.


> And all browsers continue to have that same hazard that querying layout-dependent properties forces a synchronous style recalc and layout.

Could you explain why?

Is the following true even when the parent is already in the DOM?

  parent.appendChild(document.createElement('div'));   // Fast; ~50 us
  let w = parent.innerWidth;                           // Slow, forces reflow; ~20 ms


This is a slightly weird example because it's adding an empty <div>, so you can imagine it could be optimized if it provably has no effect on layout.

But let's imagine instead that the node added to parent was a div with a text child. Or, slightly less obviously, the div is getting added in a document with a style rule of `div { width: 1000px; }`. In that case, the width could change. So the browser engine's options are:

1. Return a stale value from parent.innerWidth for now, and just lazily update style at the next event loop iteration.

2. Synchronously update style and layout (note, this update is not as expensive as a from-scratch layout), and return the up-to-date value of parent.innerWidth.

It turns out that, historically, the earliest browsers with scripting did option (2), and websites came to depend on it. So browsers had to keep on doing it, and so forth. Many folks in the web standards world would like to find a way out of this dilemma, where DOM mutation doesn't risk this kind of performance hazard.

You could also imagine an extra bad option:

3. Every time the DOM (or the CSSOM) is mutated, synchronously update layout.

This is super expensive in the face of repeated DOM mutations. Repeated DOM mutations (e.g. adding multiple elements, setting multiple attributes) are way more common than repeatedly getting style/layout-dependent attributes. (3) has the same observable functional behavior as (2), but it's a lot slower, because it will do a lot of unnecessary layouts.
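
To make the hazard concrete, here is a minimal sketch (the `parent` element and `items` array are hypothetical): interleaving writes and reads forces a synchronous layout on every iteration, while batching all writes before a single read pays for layout once.

  // Layout thrashing: every read after a write forces a synchronous layout
  for (const item of items) {
    parent.appendChild(item.node);                       // write: marks layout dirty
    item.width = parent.getBoundingClientRect().width;   // read: forces layout now
  }

  // Batched: all writes first, then one read, so layout runs once
  for (const item of items) {
    parent.appendChild(item.node);
  }
  const width = parent.getBoundingClientRect().width;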

I'm not totally sure if this explains everything you were wondering about, but I hope it helps some.


Yes, thank you.

So browsers normally optimize the first call, but are forced to update at the second one.

As a related note, are you considering the content-visibility property for webkit?


We are aware of the interest and I think we're generally in favor of it, but nothing specific to announce about timing.


> is that you can't hit that specific pitfall of forcing layout interspersed with DOM or style manipulation.

This isn't true though: useLayoutEffects that perform a read/write, littered through your code, are going to quite easily induce layout thrashing, and there's no way of splitting this up across a tree.


OK, this is the part where I have to admit I know a lot less about React than about browser engines. :-)


Isn't it a good habit to store the length of the array regardless of browser implementation? Technically, accessing a variable is simply faster than a property access on an object, and this wouldn't be a case of premature optimization either--just sound coding practice.


They're close enough not to matter on most modern browsers - I suspect V8 actually hoists the field access out of the loop when it compiles it (loop-invariant code motion is a really well-understood compiler optimization at this point). I would definitely put this in the premature optimization bucket.

It doesn't really matter nowadays anyway, because now I write my for-each loops like:

  for (let elem : arr) { ... }
or

  arr.forEach(elem => { ... });
(Well, technically now I write Android & C++ code and do leadership/communication stuff, but I brushed up on my ES6 before getting the most recent job.)


    for (let elem of arr) {}
is actually quite a bit slower than

    for (let i = 0; i < arr.length; i++) {}
because the people who built the JS spec decided that there should be a brand new heap object created every iteration. At the time, there was thought that escape analysis would let them optimize away this object, but from what I can tell, ten years later, engines are really bad at it. Escape analysis is a heuristic, and it needs to be conservative.
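
For reference, `for...of` is specified in terms of the iterator protocol, so the loop above roughly desugars to something like this, where every `next()` call on an array iterator returns a fresh `{ value, done }` result object unless the engine can optimize it away:

  const iterator = arr[Symbol.iterator]();    // one iterator object
  let step;
  while (!(step = iterator.next()).done) {    // a new { value, done } object per call
    const elem = step.value;
    // ... loop body ...
  }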

And yes, this isn't a micro-benchmark. At least in my application, performance is mostly bounded by GC pauses and collection, not slow code execution. Anything to reduce your GC pressure is going to be a good improvement... but note that modern frameworks like React are already basically trashing your heap, so changing out your loops in an already GC-heavy codebase won't really do much.


It can only hoist the length out of a for loop if it can prove that the length doesn't change, i.e. the array isn't modified. Otherwise it does have to check it each iteration. This is pretty hard in general since any time you call an external function it could potentially have a reference to the array and modify it.

I suspect the length access is just so fast that the difference between hoisting it out and not is immeasurable.
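
A minimal sketch of why that proof is needed, with a hypothetical `process` callback that happens to hold a reference to the array:

  const arr = [1, 2, 3, 4];
  function process(x) {
    if (x === 2) arr.pop();                  // shrinks the array mid-loop
  }
  for (let i = 0; i < arr.length; i++) {     // length must be re-read each iteration
    process(arr[i]);
  }
  // Hoisting arr.length to 4 up front would read past the new end of the array.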


> for (let elem : arr) { ... }

Do you mean:

  for (let elem of arr) { ... }

Unless this is a joke about writing in Java & C++ (for which the colon syntax is valid) instead of javascript.


But

   arr.forEach(elem => { ... });
is still always much slower than a for loop because of the callback call, isn't it?


That very much depends on how good the JIT is, certainly many AOT compilers would understand this pattern and inline the callback, resulting in very similar optimised code.


In cases where order doesn't matter you can avoid the array length question altogether by decrementing:

    let index = arr.length;
    do {
        index -= 1;
        console.log(arr[index]);
    } while (index > 0);
As a side note, whether it takes longer to access a variable or an object property is largely superficial, depending upon the size of the object, because caching implies creating a new variable in which to store that object property. There is time involved in creating a new variable just as there is time involved in accessing an object's property.

As an added bit of trivia, in the 1970s a software developer named Paul Heckel, known for the Heckel diff algorithm, discovered that access to object properties is faster than accessing array indexes half the time. That was in C, but it holds true in JavaScript.


In 2009 we standardized on this form for all Google Search JS for-each loops, because we were literally counting bytes on the SRP:

  for(var i=0,e;e=a[i++];){ ... }
Could result in problems if the array contained falsey values, but we just didn't do that.
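
For anyone unfamiliar with the idiom: the loop condition doubles as the element read, so the first falsy element silently ends the loop. A quick sketch of the failure mode:

  var a = ['x', '', 'y'];            // contains a falsy element
  for (var i = 0, e; e = a[i++]; ) {
    console.log(e);                  // logs 'x' only; the empty string stops the loop
  }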

Nowadays, like I mentioned above, I'd just do

  arr.forEach((elem, i) => { ... });
Which last time I checked was significantly slower than the for-loop, but I've learned my lesson about trying to optimize for browser quirks that may disappear in a year or two. :-)


>[...] discovered that access to object properties is faster than accessing array indexes half the time. That was in C language, but it holds true in JavaScript.

What are these object properties in C?


Hash map then.


Hm. So you're saying that indexing into a hash map can be faster than indexing into an array? How would this be possible? I mean, under the hood a hash map is going to an array too, which is being indexed based on the hash value...


Hash map properties are randomly accessed from memory, as opposed to array indexes.


If the code is hot, there is no perf difference in modern JavaScript engines. They will speculate, and both your var access and length property access turn into just a simple memory read or a constant.


Any compiler worth its salt should do that optimization for you.


> Technically, accessing a variable is simply faster than a property access on an object

A good compiler will make it so that they are largely equivalent. In C-based languages this is one of the first things a compiler will do, and I am sure that every JavaScript engine does this kind of thing too when possible (actually, it may even have an easier time doing it because it may be able to skip pointer analysis).


If it can be proved immutable, all modern JS engines will at worst hoist it to the earliest point of immutability. With the amount of inlining that modern JS JITs can do, they can often do a better job of proving immutability than C[++] compilers.


The premise of React isn’t at all about performance optimization, it’s about components.


Components were an implementation detail of an immediate-mode rendering paradigm for the DOM.

Previously we had been doing retained-mode rendering. We would modify & shuffle around the pieces of the page.

React's components were there to let you re-render the app quickly. When state changed, a new render happened, with new elements emitted. You didn't think of what elements used to be there.

React was about the virtual dom. It was about creating an abstraction to let us not have to regard what was on the page, when we were deciding what is on the page. Incidentally, imo, that involved components, but components, while comprising numerous html elements, serve a very similar function to html elements (especially to web components), and were not, imo, a particularly novel part of React. Yes, there was a lot to their creation & implementation, there was a lot of tech work that went into making Components a thing. But components, to me, are far overshadowed by the vdom, by the performance & speed of a data-system designed to go diff & enact desired state into the live DOM tree.

In kubernetes world, we'd call the vdom a controller. It reads the canonical state, the component tree, and ensures the target DOM machinery is kept up to date & reflects this desired state.


You've got it backwards. The creators of React have clarified this many times as well. The vdom is an implementation detail, components and declarative render are the motivation.

The vdom is interesting, but the right history isn't starting there. The premise was the ability to build bigger, better, more reliable, easier to understand/maintain/refactor apps. Components with declarative render + top down data flow enable that, vdom is just the only way to make it work without being too slow.


I disagree. Not that the vdom is an implementation detail. But to me, the fact that there is an immediate-mode rendering API for the web is far more notable than the fact that "Components" take the place of HTML elements.

Everything interesting & notable about components relates to the fact that they are immediate-mode things. Nothing else about them is particularly notable or interesting or important; the rest just has parity with what HTML elements did/do.

Regardless of intent or deliberation, this, to me, is the clear & obvious technical difference that underpins whatever goals the team thought they were shooting for. It's the major characterization of how React was different from other webdev we'd tried. Everything else is downstream of that specific choice, for how to "draw" HTML: immediate-mode.


Well then we agree, you're just calling components immediate mode and throwing away all the other things that make them interesting.

Immediate mode has a specific meaning that isn't really what React is doing. There's no concept of avoiding double buffering, immediate mode re-renders everything every frame while vdom avoids as much work as possible and updates are only triggered by actions.

But there's more interesting to them than just being declarative. A pure render function, a standardized prop boundary, top down data flow, and lifecycles (further improved by hooks, which are basically algebraic effects, which make real hot reloading work), are just as important, and all of those fall under "components".

Enyo, the WebOS framework, actually had a great declarative component model without vdom many years before React.


I'd point out Steven Wittens' Model View Catharsis[1] as some discussion that adopts similar framing to my own, that emphasizes the Immediate vs Retained mode distinction as a core way to analyze different web toolkits.

> Immediate mode has a specific meaning that isn't really what React is doing. There's no concept of avoiding double buffering, immediate mode re-renders everything every frame while vdom avoids as much work as possible and updates are only triggered by actions.

I've pointed above to others with framing similar to my own. I think you are over-focusing & refusing to see a similarity that is quite present. From a programmer perspective, react is about calling React.render(myJsx, domElement), again and again and again. How much more immediate mode does it get? "Redraw the world" is the premise.

As for double buffering, that's pretty much what the vdom is doing! There's the current buffer, there's the new world, and the vdom machinery is pushing the new buffer onto the old buffer once the render completes.

> But there's more interesting to them than just being declarative. A pure render function, a standardized prop boundary, top down data flow, and lifecycles (further improved by hooks, which are basically algebraic effects, which make real hot reloading work), are just as important, and all of those fall under "components".

These are all good characteristics of React, and part of its total package that defines it. Interesting, yes! I think I am under-attributing the different feel of components versus where the web was before. A lot of this feels, to me, like an incidental discovery on the way to a bigger phase change, from retained to immediate. I see a lot of those fingerprints in other immediate mode rendering places. But it's very new to the web, best I can tell. I need to go back & re-review Enyo. Been a while. Ahh the heady days of two way data-binding!!

To hash up some specific points? Declarative is only part of it. The DOM is declarative. But the DOM is not immediate mode rendering, it's a retained system. The declarativeness feels normal?

Pure render functions are neat to see on the web, yeah. A lot of other immediate mode systems have this, geometry shaders being a large class of systems that often are pure functions.

The prop boundary seems like something the DOM already has & used a lot, if not quite so heavily bounded. This is just properties on elements, only expressed in a slightly different way: that distinction doesn't draw any major note for me, is an interesting twist, but just a re-embodiment of what was. That it's a harder boundary now doesn't do a ton for me- elements have been powered by their properties since DOM1 & it's very normal on Custom Elements.

Top down data-flow again feels like something relatively naturally emergent in most scene-graph rendering systems, not unique to React. The web itself is a big top down renderer, always has been. It's weird because I both see tons of parallels between templates (which often nest or accept children or slots) and React components, but also I see that the feel is quite different, that components somehow are different, although I struggle to characterize how they really are and how this has changed webdev.

Not sure how I feel about lifecycle either. Willing to give this one to components. Definitely not a very immediate-mode idea, more like something we'd see in a scene-graph though.

[1] https://acko.net/blog/model-view-catharsis/


Yes, the original pitch was that you can use components, declarative code and one way data flow and it is not slow because we use virtual DOM.


Yes true, but why should one favor react over e.g. web-components if it is so much slower?


React and web components are different tools trying to solve different problems.

Web components are primarily about trying to add new virtual HTML tags to "extend the platform".

React and other frameworks are about trying to build interactive applications that require larger-scale UI management, efficiently, on top of the DOM, by defining pieces of that UI as a tree of reusable components (and using techniques that web components don't have available to them).

If I want to add a color picker to an otherwise static HTML page, I might use a color picker web component.

If I want to build a meaningful-sized app, I'd reach for React.


Web components are not built declaratively.


This declarative narrative is more or less a marketing gag, isn't it? Mixing Logic and Markup has always been an antipattern in web development. However, react calls this declarative and this is the new black now?


I’ve been doing web since the 1990s and I’ve always thought spreading a UI element across 3 separate files (often in different locations) was an anti-pattern (or 5+ files in 5 different folders if you want MVC).

React is awesome because it allows feature-aligned separation of concerns (each component has a single job - render everything about a specific element - which is usually a well defined part of a specific use case).

Jsx is the best UI system ever in terms of productivity - speaking from experience: I’ve implemented production apps using dozens of UI frameworks/platforms - Html, WYSIWYG, Flash, WindowsForms, WebForms, Ajax, Asp.Net Mvc, Razor, WPF, Xaml, Silverlight, Knockout, Handlebars, PhoneGap, Ionic, Bootstrap, MaterialUI, Angular2, React w/ Class Components, React w/ Mobx, React w/ Hooks

I can tell you pros/cons of each of those. But at the end of the day I can develop an entire app in days in React+Hooks which would take me weeks in most any other.


Have you written up your views about various UI frameworks somewhere? I'd be very interested in reading a comparison from someone who's used so many.


No, but it's a great idea for a blog post! When I have a chance, I'll let you know.


> Remember when it was a good idea to cache the .length property because it was O(N) on some browser (I think an early IE), so you'd write your for-loops like this?

I do not, mostly because I came online at a time that Internet Explorer was already uncool and Mozilla was clearly better, and also I had no idea what JavaScript was ;) But I can only imagine what the browser monoculture of the time was like, viewing the echo of it many years later with Blink.


From the linked bug:

> I think I can reproduce this behavior on Figma's code base too. I've never noticed because we use Chrome pretty much exclusively for development.

We're already there. Chrome dominates so much of the dev market, so naturally v8 is what people are optimising towards.


In this particular case, they were not optimizing for Chrome. They generated the code that was obvious to generate.

Because of a perf bug in WebKit they added a flag to generate less correct code that is fast in WebKit.

If anything, it's a case of optimizing towards WebKit.

But really, it's a case of performance bug in WebKit and them adding a work-around.


Don't worry, everyone still has to test for the captive audience of iOS Webkit users.


JS is so ultra-optimized by modern browsers that we're going to run into cases like this. It's inevitable.

Hopefully they're always treated like bugs and not something we need to work around.


The JSC team (and I suspect V8 and SM) always treat performance cliffs caused by normal/sensible code patterns as a bug.

The problem is that when implementing improvements it's easy to forget every mostly-equivalent thing and so miss obvious things (often the entire fix in the end is "if (canDoItFast) doTheExistingFastThing()").


I can imagine some workarounds for that. Go to GitHub and pull the tests for a few thousand libraries, and watch for major performance changes.

I mean I'm sure they have a solution, since this seems like a rare occurrence.

I will note though that const/let are not mostly equivalent to var. They're often used interchangeably, but they work pretty differently as far as a JS engine is concerned.


This has been happening for a long time, sadly. Years ago Chrome had poor optimization around try-catch and this meant it couldn't be used in the application I was working on, as we pretty much targeted only Chrome. I guess by the time async-await got into the engine try-catch was optimized enough and now this doesn't seem to be an issue.

Hopefully this changes, even slowly, but it'll probably take a while since Chrome's got so much share.


Chrome never used WebKit’s JavaScriptCore.


Thanks, I removed that bit.


>Looks like the JavaScriptCore people are on it :)

Well we know pizlonator hangs around on HN from time to time :) Especially on topic of JS and Webkit.


Yes, he does, as well as a number of WebKit people. The team is fairly plugged in with what's happening online, you'll also see them around on Twitter and such, which is pretty nice. Actually, there's even a large display in the office that's set to display tweets mentioning WebKit in real-time, although that might be a little less relevant for the JavaScriptCore people since they seemed to mostly WFH even pre-pandemic ;)


> I really hope we don’t end up with JavaScript code being written to be optimized well on one particular implementation of a JavaScript engine…

You're late by many years, most frontend developers have been explicitly optimizing for Chrome for a long time now.


Also, I've generally decided that using "var" is a bad idea because it is way too easy to get tripped up with its funky scoping rules. I pretty much only use "let" and "const".


This already happens regularly with C and C++, where a large majority of developers attribute to those languages compiler-specific behaviours that aren't written anywhere in the standard.


Well yes, there's the huge "GNU C(++)" dialect of course; if nothing else I think Clang's existence has helped there although of course it tended to adopt a lot of the things outright in the past. This is really something you always get when there are multiple non-reference implementations of something and one has significantly larger market share.

But, I think a more relevant example might be people who write code prefixed with comments like "Destructure this manually. At -O2 on GCC this allows the first field to be placed in a register" which is putting waaay too much dependence on the internals of a specific implementation, beyond even whatever extensions a particular compiler claims to support.


My comment applies also to optimizations.

Because I have seen those kinds of discussions far too often, even back in the Usenet days, like the example you point out.


> I really hope we don’t end up with JavaScript code being written to be optimized well on one particular implementation of a JavaScript engine…

I'm afraid that ship has already sailed.


But const/let are not semantically equivalent to var.


They aren’t exactly the same thing, but in this case there’s a replacement being done to get essentially the same thing as a “fix” so they’re evidently close enough for this case.


If it's really a concern, what are all those transpilers for if not this kind of thing?


I can't even remember the last time I used var when writing JavaScript. Scoped const/let are just easier to reason with.

This seems bad.


It's incredible that such a common case can be affected so dramatically and yet nobody noticed it until now. Really hammers home the problem of having a browser monoculture.


This isn't completely a browser monoculture issue though. It might be part of the issue, but it's very likely because most big applications use tooling that hide this issue, i.e. using tools like Babel which just translate let/const into var automatically. I imagine it's never really been noticed because it just never really has been a "real world" problem.
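
For context, a typical ES5-targeting Babel build rewrites block-scoped declarations to `var` (renaming or wrapping only where block scoping would otherwise change behavior), so the shipped bundle never exercises the engine's `let`/`const` paths at all. A rough sketch of the kind of output, not Babel's exact formatting:

  // Authored source
  const limit = 10;
  for (let i = 0; i < limit; i++) {
    let squared = i * i;
  }

  // Approximate ES5 output
  var limit = 10;
  for (var i = 0; i < limit; i++) {
    var squared = i * i;
  }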


You make a good point that a Babel configuration targeting pretty old browsers would convert const/let to var, though I don't think you can say it necessarily isn't a "real world" problem. It could very well be a real problem for a significant (Safari-using) minority of users, who don't see it as a bug and so it never comes to the attention of developers, but continues causing subtle headaches for thousands of people.


There are roughly 1+ billion devices running Safari _daily_; I'm not sure we should torture a Safari bug into a problem with Chrome.


You misunderstand. The problem is that all the devs, and something like 3/4 of regular users, all use Chrome, meaning something like this can fly under the radar, going unnoticed, and make Safari or Firefox a second-class experience. It's only a "problem with Chrome" insomuch as Chrome has an unreasonably large market-share.


Surely professional or hobbyist software developers are more likely than the average user to use Safari or Firefox, not less.


Definitely not, at least in the case of front-end devs. Speaking as one myself, part of this is that Chrome's dev tools are so much better. Part of it is that the great majority of your users are on Chromium, so that is the most important case to test. But of course it's a self-reinforcing cycle: the more devs that only use Chrome, the more apps that work best on Chrome, the more users that switch over to Chrome, the more devs who only feel the need to test on Chrome, repeat.


That depends. CSS prototyping is usually easier in Firefox. For example grid editing, flexbox, shape path editing.

I mostly switch to Chrome when debugging PWAs. Other than that, Firefox gives me better UX.


It's recommended to install your goalposts with concrete footings that go below the frost line to prevent movement.


To add: any project or random script that hasn't at least considered babel/webpack probably isn't at a point where they would notice a 10x slowdown - the difference could be 10ms vs 100ms.


It would blow your mind if you saw how much slower querySelector is compared to old school getElementByid, getElementsByClassName etc.

We tend to think that newer is better and faster, but it's also wise to think that old apis had to be fast on old hardware. Hardware that is not the target of new APIs.


That wouldn't blow my mind, and it's also a wildly different question. The OP is about a core language feature performing differently on two browsers. querySelector is fundamentally a higher-level API across all browsers.

Also, block-scoping is a language feature that's fundamental to virtually every other modern language out there. This isn't some high-falutin' new API. This is basic.


> how much slower querySelector is compared to old school getElementByid, getElementsByClassName

IE 8 got querySelector before IE 9 got getElementsByClassName, so it’s not really an old vs. new thing, and I don’t think anyone expects querySelector to be faster than getElementById.


Presumably it's because everyone is using babel to transpile down to ES5 for old browser support.


Or, maybe, it's an edge (ha!) case with little practical relevance because – as but one possibility – absolute numbers are low and dominated by network latency or rendering.


Does it? It’s not like people are constantly running comparison benchmarks of var/const/let between browsers.


If every large JS app on Safari is 10x slower than it is on Chrome, and the user base of Safari has been dealing with that since who-knows-when, and no developer noticed because they're all testing on Chrome, that's a problem.


Yeah. A problem Safari's developers should address. Apple is not a wee baby company that lacks the resources to keep up with Chrome in any dimension.


Or they are all compiling their JavaScript to legacy JavaScript specs for WebKit.


It's Apple's problem to solve though. I mean, the alternative is developers writing code that's efficient enough to begin with that it doesn't really matter when it's 10x slower, but they should probably be doing that anyway.


no, it really hammers home that this distinction doesn't matter because the absolute numbers are small compared to everything else.


If you're transpiling the JS code to ES5 then you're deploying 'var'. That's what actually matters.


I use var when I pop open devtools as a REPL. const/let are annoying there because copy/paste into the REPL multiple times throws rebinding errors. Just goes to show there are jobs for every tool :)
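
A concrete example of the paste-twice annoyance, assuming a console that doesn't special-case top-level redeclaration (newer Chrome DevTools versions do allow re-entering `let`/`const` at the prompt):

  let count = 1;   // pasting this a second time: SyntaxError: 'count' has already been declared
  var total = 0;   // pasting this a second time: fine, var tolerates redeclaration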


I do wonder how many projects actually have const and let in their production code, vs. how many are writing them, but still compiling their code down to ES5.


Until IE11 dies, just about everyone compiles down.


There are quite a few markets where IE11 is about dead already, I think - the question is, how many?


Exactly. I saw a tweet the other day with someone explaining the difference between var/const/let and I responded with "never use var...ever". Maybe that was bad advice.


I don't mean this as a call-out but I thought this comment from the maintainer was really interesting:

> I think I can reproduce this behavior on Figma's code base too. I've never noticed because we use Chrome pretty much exclusively for development.

I'm more and more concerned that the monoculture of chrome for developer tooling is making the web a worse place...


Not sure if it's still valid, but I remember doing performance testing for "hello world"*10mil in nodejs and golang around 4-5 years back. What I noticed was that "var" and "const" gave comparable results to golang, but "let" was around 100x slower for no reason. It was just a single variable declaration with a "print" statement called multiple times.


If it was written like

  while (true) {
    let hello = "world";
  }
The scope would keep getting freed then re-allocated, which will be slower than simply continuously setting an existing variable's memory location over and over again.


While the creation and destruction of a scope every loop iteration is the correct mental model for what this code does, there's no reason that implementations must actually perform any allocations - especially for an example like this where it is trivially determinable that no bindings escape the immediate scope.


That would apply to a pure interpreter, but all JS runtimes will do at least some optimisations, even on their baseline profile.


If that was the reason, then shouldn't "const" be as slow as "let"?


It should be valid to move `const` outside of the loop, since by definition it cannot be changed.

`let` can be changed so cannot be optimized out this way (at least not without doing more complex analysis and proving that the variable is not being changed).


let wasn't inside the loop, which is what made it weird.


  node const.js   6.58s user 1.25s system 60% cpu 13.007 total
  node let.js     7.00s user 1.35s system 51% cpu 16.272 total
  node var.js     7.21s user 1.22s system 55% cpu 15.183 total
  go run main.go  0.80s user 0.94s system 17% cpu 10.064 total


you're not measuring what you think you are


I still have a silly dream that one day Javascript will split the numeric type into 32-bit floats and ints by default, like Ruby does. I wonder how much of the existing ecosystem that would break.


We've got various number types in array form: {u}int{8,16,32}, floats too.

I feel like that is about the best we can do without harming backwards compatibility. They are there when needed, but we are stuck with the ambiguous "number" type in general.


They are actually 64-bit floats (doubles) and unlimited integers in Ruby.


And now JS also has unlimited integers (BigInt)


Trying to optimize for browsers is an abyss. There's a handful of really unintuitive behaviors because JS is slow, but modern browsers have put in herculean efforts to optimize the engines rather than tackle the language.

It's kind of pointless to even try to optimize for things like this because there's a very good chance that it can change in the next few weeks/months/years.


The Chrome team's direct advice is to not optimize this way, and rather to write as idiomatically as possible, because it is likely that they are aware of the issue (e.g. it's a new language feature and they are collecting usage data before doing optimizations), and it will improve with time.


Bug that is tracking this in webkit land is https://bugs.webkit.org/show_bug.cgi?id=199866


Follow up: If you ever encounter a case where webkit/jsc is significantly slower than other engines they will happily take bug reports at bugs.webkit.org as the JSC team is always looking for tests that show suboptimal compilation that occurs in the real world.


I ran into this when working on fengari: while debugging with some V8 people, we found that using `const`/`let` pushes the variable onto a stack of things to pop at end of scope, while `var` does not. This was accepted as intentional; I was told that if I really cared about speed I should use `var`...


I would expect that a let at a function scope should be equivalent to a var inside a block


Somehow people here are blaming Chrome for a bug in Safari.


I appreciate all the new features in new JS, and I will continue using the oldest features possible for everything because they are likely the most tested, and least complicated.

It also allows me to build sites which work in Netscape 3.0 and up without any special compatibility modes. :)


Careful. The newer features of JS are generally added because they are less complicated than the features they're succeeding.

For instance, "var" may seem easier, but "const" is definitely less complicated. "var" has unintuitive edge cases with closures that "const" does not. `var` is also scoped to the nearest function rather than the nearest block, a "feature" that has tripped up many a new JS developer. `const` has none of these surprises.
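
The closure edge case in question, as a quick sketch:

  for (var i = 0; i < 3; i++) {
    setTimeout(() => console.log(i));   // logs 3, 3, 3 -- one function-scoped binding shared by all callbacks
  }
  for (let j = 0; j < 3; j++) {
    setTimeout(() => console.log(j));   // logs 0, 1, 2 -- a fresh block-scoped binding per iteration
  }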


Except `const` ruins interactive rapid prototyping using the browser DevTools console. If `width` was declared `const width = 800` you can't interactively experiment by setting `width=1024` because you get `..TypeError: Assignment to constant....`


I mean, it's literally a constant. Of course you can't redefine it. You said it was a constant! :P That's like saying that declaring a variable as "8" instead of 8 ruins rapid prototyping because now you can't multiply it.


More like replacing declarations with magic number literals. And that does hurt interactive prototyping if it happens a lot!


> Of course you can't redefine it.

Well, you can redefine it by editing its source code file on disk. (Unless the file is read-only.) But it's a lot faster to be able to redefine it interactively inside the DevTools console and I wish there was a "dev" setting that allowed this. Also, you said "you declared it" but usually it wasn't me who declared it constant. Rather, some other programmer thought `width=800` or `numColumns=80` should be declared constant. "Variables aren't; constants won't be." No coder is an island: one programmer's bad `const` can make life harder for other coders. Related: One reason I use Chromium browsers instead of Firefox is because in Firefox DevTools console once you declare `class A {}` you can't interactively change it to, say, add a constructor like `class A { constructor (foo) { this.foo = foo } }`. During learning and experimentation this is lousy; you can't interactively iterate at the console prompt like a proper dynamic language. You must instead put your code in an external file and set up a "save file and re-run everything" system just like static languages like C++ require.


I'm also a big fan of hacking away in the browser console. Can't you use the `let foo = class {}` syntax for iterating on classes?


If you are the one writing entirely new code then you can choose to do that. But if you must build upon the work of a different programmer who is not you and that programmer chose `class Foo{}` instead of `Foo=class{}` then the codebase is now rigid and inflexible for purposes of prototyping and experimentation. You are going to have to save the JavaScript to a disk file and edit it there to get rid of the non-redefinable `class Foo{}` or `const numColumns=80`.


class A defines a let variable. It's how the standard defines it.

You can replace A with: A = class {...}

This works with any browser.

BTW, in Firefox there is a multi-line editor, with code completion. I actually find it more convenient to test large code fragments in Firefox because of that.


Good tip, thanks, but it means that code written like

  class A {}
has a huge difference from code written like

  A = class {}
In classical JavaScript consider how small the difference is between

  function foo () {}
and

  foo = function () {}
Function hoisting only works for the "function foo(){}" form so there's a difference but they intuitively work essentially the same and either form works well when experimenting in the console. If enough rules like "class A {} is very different from A = class {}" are piled onto JavaScript it's going to become like C++, a language so complex that people would rather learn a simpler language so they can focus on the complexities of their actual problem space rather than the complexities of their language. A new language called Zig is now trending upward because a growing number of programmers would rather have a language they can 100% understand after a reasonable time investment rather than a language like C++ where even after a 20 year career with it and implementing portions of a compiler for it you still get painfully surprised by some obscure rule interaction you never dreamed to imagine possible.
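
A quick illustration of that hoisting difference:

  foo();                       // works: function declarations are hoisted in full
  function foo () {}

  bar();                       // TypeError: bar is not a function (only `var bar` is hoisted, as undefined)
  var bar = function () {};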


Misleading title? It seems far more likely this is specifically related to the js engine than the renderer.


Are there any browsers using Webkit that aren't using JavascriptCore? Now that Chrome uses Blink, I'm pretty sure anyone using v8 is probably using Blink as well?


I agree; while it may not be entirely technically accurate to say WebKit, I do think it's specific enough, and also a more broadly known technology than JavaScriptCore.


As I recall some samsung browsers and the like use it? Webkit is removing support for non-JSC engines, and I remember the samsung people as well as some others being upset about this.


We removed support for non-JSC engines immediately after the Blink fork.


And Blink likewise removed support for non-V8. All the major browser engines are relatively tightly coupled to the JS engine at this point (not impossibly so, but there's definitely a tighter coupling than there was historically).


It would be easier to swap JS engines than other browser engine components probably, but yeah, not trivial. The patches to make WebKit support choice of JSC or V8 were janky in the first place and constrained what we could do.


This takes me back to VB6 days where placing a dim in the sub would be too costly if that sub was called iteratively and using a global was the only way to optimize it because of the poor stack allocations.

Would love to see what semantics are making let/const abysmal.


So this is why we need multiple browser implementations. Competition fosters excellence. Imagine if Chromium had 100% market capture. Would they be incentivised to fix this?


This problem doesn't exist at all in a world where chromium is the only browser, because there is no other party to get it wrong. Note that this is a problem with safari mishandling something, not chrome. That doesn't mean a single browser world is better. Just this particular instance if anything is a point in the pro column for browser monoculture.


Safari is plagued with these issues, it's the new IE. The whole focus seems to be on implementing privacy features to advertise Apple's privacy focus. CSS bugs are not fixed and newer APIs are not implemented.


The question is: does JavascriptCore implement other ES6/ES7/ES8 features just as inefficiently? Does it mean we should always transpile to ES5 even on browsers that support more modern variants?


Reading the issue comments, I still don't quite get why this is the case. Anybody with a better handle on PL theory and/or JS internals care to ELI5?


Perhaps the linked, seemingly related WebKit bug may be interesting? https://bugs.webkit.org/show_bug.cgi?id=199866


TLDR seems like var isn't scope-bound, but let is, and actually calculating the scope for let statements is what's causing the slowdown?


Damn you Olin Shivers!


wild speculation: let/const is implemented by creating a new scope with every let or const declaration. Looking up a variable then ends up tracing linearly through all of these scopes. Bundlers end up creating a huge number of let/const declarations, and thereby scopes. Then every variable reference ends up scanning through this huge linear list of scopes to find the variable it refers to.


Why would there be a whole new scope for every declaration, instead of just each block? To prevent access within the block but before the declaration?


Probably was simplest to implement, and then not looked at again.


Yes, I think creating a new scope for every let/const would be the simplest (but not very efficient) way to accomplishing that.


While we're all here.. Fellow JS developers, please, for the love of all that's good, stop using const for everything. If you want a strongly typed language with immutable variables, just go use another language already. Just use let unless you actually need a const.

(I don't care if I get a million down votes, I really don't. You're all misusing const and you know it. It's a shitty hack turned into a fad by someone who didn't understand the language.)


People (more or less) can’t use another language if they’re targeting the browser.

Also, in general, using the most limited constructs that fit your use case makes a tonne of sense for code cleanliness, in any language. For example, in a language supporting private/protected/public properties and methods, you should always default to private, as that’s most limited and easiest to reason about. Then make it more public as necessary. Same goes for const vs. let - const is more limited, it’s a better default choice, only use let if you actually need to re-assign it. Using const means there’s one less thing for readers of the code to worry about, it’s a variable that CAN’T be reassigned, the same way you don’t have to worry about a private method being called externally.

This is, essentially, the “principle of least power.”


The problem is that, when prototyping and iterating a design I often don't know in advance what should be private or const and what shouldn't be. And when interactively experimenting in the DevTools console I often want to redefine `const shadowMapSize=1024` to `shadowMapSize=2048` but const breaks interactive console redefinition. I have to save the js file to disk and edit shadowMapSize there, then reload the code. This is the workflow for a static language like C++, not a dynamic language like Lisp or JavaScript.


This seems like a realllly weak reason to write less maintainable code (harder than it needs to be for future readers to reason about). For workflows like this, you can easily use let/var while prototyping. Then once you’re done and are cleaning up the code, writing tests, etc., switch to const if the reference is never re-assigned. It’s the same in every language - when prototyping, go wild with god functions, mutable state, public everything, whatever, but clean it up and make it readable/maintained before it gets merged to master.

Also, if you’re talking about leaving mutability like this in the code long term, this really only applies to GLOBAL mutable state, and global mutable state is a terrible thing for code maintainability in any language. If it’s not global, you can’t fiddle with it later in the console anyways. Littering your program with global mutable state just because “someday, someone may want to fiddle with this in the console, and this makes that slightly easier”, that’s just not a reasonable argument.


Immutable references, mutable data structures.

You're welcome to go use another language that doesn't have a formal spec, if you're so inclined to disagree with what the spec for this one says.


The keyword instructs the human to immediately unassign any mental overhead reserved for tracking changes to the value while `let` or `var` indicates that further digestion is required to determine potential. I see some value in that. Stop using `const` for declaring named functions, though.


Your efforts are better spent contributing upstream: https://github.com/eslint/eslint/blob/master/docs/rules/pref...


Could you explain what you think an "actual need" for const looks like?


EXACTLY.


Well, the thing is, if your answer is "never" then I'm less inclined to agree with you. I think "don't use it ever" is a much weaker argument than "here's where you should and shouldn't use it".


Why? What's the problem?

const improves readability, and the ability to reason about code. It lessens the need to exhaustively scan the code looking for other assignments to the variable.


You should focus less on constraining yourself to arbitrary ideologies and focus more on writing good code that’s easier to maintain and understand.


Could you explain why? I don’t really know JS.


const prevents rebinding a name but doesn't affect the variable's mutability.

    const foo = {};
    foo = 42; //not allowed
    foo.bar = 'baz'; //this is fine


Yes but why do you consider that bad?

It’s the same behaviour in many other languages.

Why would rebinding variables ever be a good thing?


Isn't that literally the same as Java, or a const pointer in C/C++?

const fixes the pointer but doesn't prevent the user from changing the value in the storage it points to.


There are two possibilities in C++. I am not sure what the situation is in Java.

It is possible to define a pointer to an object whose contents may not be changed. This is done with ‘const type * object’.

If you want a pointer whose value must not change, but do not care about the contents of the pointed to object, you would write ‘type * const object’.


You ought to have mentioned that pointers of the latter type are quite rare in C. A mention of a const pointer practically always means a pointer-to-const: a pointer you can't write through. Often you can still "re-bind" such a pointer to another object – in both ways the polar opposite of JS const.

Of course, in C++ there are const references as well, and in this case const also means you can't modify the referenced object (in any case in C++ you can never re-bind a reference).


It's the same in Java and never caused any problems. You just have to understand what const does. What the OP is looking for is Object.seal() and Object.freeze().

This is a pointless discussion.


Sure, this describes the problems with JS's const, but doesn't really describe why it shouldn't be used?


Most JS people consider it perfectly fine to use it. It's just a tool. A minority wants to make a storm in a teacup because they read this[1]

[1] https://jamie.build/const


He’s an angry chap isn’t he?


If you ever feel compelled to google him, you may want to grab some popcorn first :)


Hmm, some good points there. I think I'll avoid const now.


I didn’t think any of these points were very strong.

const x = {immutableValue}

That tells me the thing is fully immutable. Whether it’s a global variable or not, that’s still nice to know when reading the code! Likewise:

let x = {immutableValue}

const x = {mutableValue}

These tell me different things. The first tells me I should look out for re-assignment, the second that I should look out for mutation of the value.

In contrast, this:

let x = {mutableValue}

Clearly has the most cognitive load. Anything could happen with this. If it’s a bigger, more complex function/whatever, “what happens to x, it could be anything” is just one more thing to fit in my head as I try to figure out what this code does. Enough little uncertainties like this, and I have to give up understanding by reading entirely, and resort to a debugger. Had the author limited the scope of what was possible, it’d be a bit easier to understand, with really no drawbacks. It’s not the end of the world, but “small, simple, easy thing I can do to make the code easier to understand for future readers” ... why not do that?


Really, I just have a somewhat FP bias, and I'd really enjoy it if const was actually immutable, since it'd be a nice concise way of expressing that. It's just not as useful as you'd think given how often it seems to be used (by contrast I almost never see anyone actually making anything immutable in JS).


I like FP too, but most languages, including FP ones, have both mutable/immutable references and mutable/immutable values. And in JS, things like strings, numbers and booleans are immutable value, plus immutable.js is quite popular for immutable data structures, especially in React+Redux apps.


Nice projection. I say over-using const is a fad (which it is) so you repeat it. I have zero idea who that guy is. Not even bothering to click, actually, as I'm sure he's just going to say what I said.


Ironically, I think your comment falls into the same cognitive trap as the const vs let thing: the always-use-let argument is usually that const does not mean immutable data structure and that fact is confusing, so you should use let to signal that the data structure is mutable, even though what let really means is to make the binding mutable, not to indicate anything about the data structure it points to. The counter-argument - despite what it may seem - is not the opposite stance, but rather that because bindings and data structures are two different things, conflating the two is confusing.

Likewise, I'm saying that a minority is vocal because reading a rant from a OSS celebrity either reaffirms their preconceptions or sways them through aggressiveness (both of which are objectively true, as I have witnessed cases of both), but you're accusing me of projecting (presumably because you think that I'm making a claim about you personally - which is not true).

Consider that some of the words you use are weasel words ("overusing", "fad"), which, IMHO imply a tautology (i.e. "I think X, therefore X", as opposed to "The facts are X, therefore Y"). I originally said that the spec is clear about what const and let are. Implying that following the spec is a fad is needlessly derogatory and doesn't address the double standard with regards to the confusing-ness of const vs let.


There's a record & tuple proposal that would address this use case (currently served by tools such as Immer):

  const foo = #{};
  foo = 42; //not allowed
  foo.bar = 'baz'; //not allowed
https://2ality.com/2020/05/records-tuples-first-look.html


const is useful when multiple parts of a program might refer to the same object/value; blocking rebinding is what you need to keep all the references in sync.


You seem to be confusing const with the functionality of Object.seal() and Object.freeze(). While const only makes the reference immutable, with Object.freeze() you can actually make the data immutable. Once you get this, there is no problem with using const all the time.
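
A minimal sketch of that distinction: `const` freezes the binding, while `Object.freeze()` (shallowly) freezes the data.

  const foo = Object.freeze({ bar: 1 });
  foo = {};        // TypeError: assignment to constant variable (const blocks rebinding)
  foo.bar = 2;     // ignored, or a TypeError in strict mode (freeze blocks mutation)
  foo.baz = 3;     // likewise blocked; note that the freeze is shallow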



