The global scope polluter has pretty bad performance and interop surprises. You shouldn't depend on it; use getElementById instead, even if it's a bit more verbose.
It uses a property interceptor, which is fairly slow in V8.
Is there a reason to not use querySelector, since it’s a lot more flexible? One reason jQuery became so popular is because the DOM was painful to use. Things like querySelector fix that.
getElementById is slightly faster, but not by enough to care IIRC, so I use querySelector for consistency and its flexibility.
> One reason jQuery became so popular is because the DOM was painful
I would say that is the key reason, with everything else being collateral benefits, assuming you lump element selection, handling legacy incompatibilities, and function chaining to reduce boilerplate under the same banner of "making the DOM less painful".
My experience is that most developers tend to guess at performance and throw away numbers they disagree with. As a result performance testing is only something product owners care about.
In isolation definitely, but in real world code it might be faster to use querySelector for branchy code if it doesn’t always use an id. As with everything, if it’s not performance-sensitive write the code that’s easier for humans to read, and if it is measure first.
I'm not sure what you're trying to say here, as it's tautologically correct that getElementById can't be used in cases where you want to select on more than just the id.
Do you mean a use case where you have branchy code that produces a selector string that has some id only paths?
Yes. Branchy code which could sometimes use getElementById and other times use querySelector may be faster if it always uses querySelector, even if that call itself is slower. The reason for this is that the JITs sometimes deoptimize on branchy logic with inconsistent property access between branches. They also deoptimize on branchy logic defining intermediate values, but much less often when the value is a consistent type like a string (selector).
This would only be relevant if you're doing something like
var theFunction = condition ? "querySelector" : "getElementById";
...
document[theFunction](...)
it won't apply to
if (condition)
document.querySelector(...)
else
document.getElementById(...)
From the point of view of the runtime, the latter has two call sites, and each one is monomorphic and will very quickly (in the first layer of the JIT tower, generally) become a Structure/Shape/HiddenClass check on `document` followed by a direct call to the host environment's implementation function (or, more likely, the argument-checking and marshaling function in front of the actual internal implementation).
It is possible that the higher level JITs pay attention to the branch counts on conditions or use other side channels for the deopt, but for host functions it's generally not something that will happen as the JITs see natively implemented functions as largely opaque barriers - they only have a few internal (to the runtime itself) cases where they make any assumptions about the behaviour of host functions.
> From the point of view of the runtime, the latter has two call sites, and each one is monomorphic
I expected that to be the case, but I've actually measured it and it's not always. It is when the object being accessed has a consistent shape/hidden class, as you mention, but a lot of the time it doesn't. A weird case is native interfaces: while the host functions are opaque and you'd expect them to have a stable shape, the interfaces themselves are often mutable, either for historical reasons or because of shortcuts taken in newer proposals/implementations. Accessing document.foo isn't, and can't be, monomorphic in many cases, even if it can be treated that way speculatively. But branchy code can throw out all sorts of speculation of that sort. I don't know which level of the JIT this occurs at; I'm just speaking from having measured it as a user of the APIs.
This isn't me disagreeing, just me being surprised and trying to think of why the optimizer falls off.
JSC at least has flags on the structure that track which ones will bollocks up caching (e.g. the misery that is looking up things in the prototype chain if the object in question has magic properties that don't influence the structure).
One thought I have is if your test case was something like
if (a)
obj.doTheNativeThing()
else
obj.doTheOtherNativeThing()
(or whatever)
and you primed the caches with `a` being a 50/50 true/false split vs. all one way. My thinking (I have not done any of the debugging or logging) is that the branch that isn't taken won't insert any information about the call target. I can see that resulting in the generated code in the optimizing layers of the JITs being something along the lines of
if (a)
call _actualNativeFunction
else
deopt
The deopt terminates the execution flow so then in principle the VM gets to make assumptions about the code state after the whole if/else block, but more importantly the actual size of the code for the function is smaller, and so if you were close to the inlining limit dropping the content of the else branch _could_ result in your test function getting inlined, and then follow on optimizations can happen in the context of the function that you use to run your test with. Even if there aren't magic follow on optimizations removing the intermediate call can itself be a significant perf win.
Testing the performance of engines was super annoying back when I worked on JSC, as you have to try to construct real test cases, but that means competing with your test functions being inlined. JSC (and presumably other engines) has things you can do (outside of the browser context) to explicitly prevent inlining of a function, but then that is also not necessarily realistic. And it's super easy to accidentally make useless test cases, e.g.
function runTest(f) {
let start = new Date;
for (let j = 0; j < 10000; j++)
f()
let end = new Date;
console.log(end - start)
}
function test1() {
...
}
function test2() {
...
}
runTest(test1)
runTest(test2)
In the first run with test1, f (in runTest) is obviously monomorphic, so the JIT happily inlines it (for the sake of the example assume both functions are below the max inlining size). The next run with test2 makes f polymorphic so runTest gets recompiled and doesn't inline. Now if test1 and test2 are both small the overhead of the call can dominate the cpu time taken which means that if you simply force no inlining of the function you may no longer be getting any useful information, which is obviously annoying :D
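One workaround for the polymorphic-harness problem (my sketch, not from the thread) is to stamp out a fresh copy of the timing harness per test function, so each harness's `f()` call site only ever sees one target and can stay monomorphic (exact feedback sharing is engine-dependent):

```javascript
// A fresh harness closure per test keeps each `f()` call site monomorphic,
// since each closure instance can collect its own type feedback.
// The test bodies here are trivial stand-ins.
function makeTimer(iterations) {
  return function runTest(f) {
    const start = Date.now();
    for (let j = 0; j < iterations; j++) f();
    return Date.now() - start;
  };
}

let counter = 0;
const test1 = () => { counter += 1; };
const test2 = () => { counter += 2; };

const t1 = makeTimer(10000)(test1); // this harness only ever calls test1
const t2 = makeTimer(10000)(test2); // a separate harness instance for test2
```

The trade-off is that you're now also measuring a cold harness each time, so warm-up runs are still needed.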
Also there's a performance cliff when you have a lot of unique ids (or selectors in use from JS).
When you hit the cache querySelector is primarily a getElementById call and then some overhead to match the selector a second time (which chrome should really optimize):
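The fast path being described can be sketched like this (my simplification with a stubbed document, not actual engine code):

```javascript
// Sketch of the "#id" fast path: shortcut a bare id selector to the id map,
// then re-verify the element against the selector - that second match is the
// overhead mentioned above. All names here are made up for illustration.
function fastQuerySelector(doc, selector) {
  const m = /^#([A-Za-z_][A-Za-z0-9_-]*)$/.exec(selector);
  if (m) {
    const el = doc.getElementById(m[1]);
    // Match the selector a second time against the found element.
    return el && doc.matches(el, selector) ? el : null;
  }
  return doc.querySelectorSlow(selector); // full selector engine fallback
}

// Stub "document" so the sketch runs outside a browser.
const el = { id: "app" };
const doc = {
  getElementById: id => (id === "app" ? el : null),
  matches: (e, sel) => "#" + e.id === sel,
  querySelectorSlow: () => null
};
```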
That's surprising; WebKit, Blink, and I'm guessing Gecko all optimize the querySelector cases. I assume it's the cost of the NodeList (because NodeLists are live :-/)
I actually just went and tested, and in WebKit at least, with my 100% perfect test case, querySelector took 2x longer than getElementById. I tried to understand what the current WebKit code does, but the selector matching code is now excitingly complex due to the CSS JIT.
Many many years ago I recall querySelector starting out with a check for #someCSSIdentifier and shortcutting to the getElementById path, but maybe my memory is playing tricks on me.
Yup that's what it did, and Chrome still does. After much research and prototyping the CSS JIT didn't improve real world content (especially given the complexity) so it was never added to Chrome.
I'm surprised to find that this trick still works even in the new backwards-incompatible JavaScript Modules (using <script type="module">), which enables "strict" mode and a number of other strictness improvements by default.
I believe it works because the global object ("globalThis") is the Window in either case; this is why JavaScript Modules can refer to "window" in the global scope without explicitly importing it.
This seems like a missed opportunity. JavaScript Modules should have been required to "import {window} from 'dom'" or something, clearing out its global namespace.
There is some effort to standardize something along these lines. Well, some things which combined would achieve this. It’s too late to bake it into ESM, but I believe it’ll be possible with ShadowRealms[1] and/or SES[2], and Built-in Modules (JS STL)[3].
I don’t know what jokes or Reddit thread you’re referring to or why it has anything to do with my comment referencing three technical proposals, but I’ll take your word for it that you don’t have to repeat them here.
https://www.reddit.com/r/javascript/comments/t8mdli/future_j... if you’re curious. Didn’t mean to be overly vague, I just think ShadowRealms is an unusually funny name for a technical proposal. I guess it doesn’t have the same effect if you didn’t grow up watching the same cartoons.
I don't think this article is complete. It claims there's no pollution, which is true of window and most HTML elements, but not always. Check this out: you can set an img's name to getElementById, and now document.getElementById is the image element!
<img id="asdf" name="getElementById" />
<script>
// The img object
console.log(document.getElementById);
// TypeError: document.getElementById is not a function :D
console.log(document.getElementById('asdf'));
</script>
I tried poking around for security vulnerabilities with this but couldn't find any :(
It seems that the names overwrite properties on document with themselves only for these elements: embed, form, iframe, img, object.
Thank you for this! I had a feeling it wasn't a security issue. I closed my ticket saying it might be one, which I'd filed after finding websites mentioning DOM clobbering.
Even if you're not using modules, an IIFE avoids all this by making your variables local instead of having them define/update properties on the global object.
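A minimal sketch of the IIFE pattern (the names here are made up):

```javascript
// Wrapping code in an IIFE keeps `var` declarations function-scoped
// instead of letting them become properties on the global object
// (where a top-level `var status` would clobber window.status).
var app = (function () {
  var status = "ready";          // local; does not touch window.status
  var cache = {};                // invisible outside the IIFE
  function getStatus() { return status; }
  return { getStatus };          // the only deliberately exposed value
})();
```

Only `app` is visible outside; `status` and `cache` can no longer collide with named elements or host properties.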
This is one of those things that pops up every year or two. Unfortunately, the person writing about the newly discovered weird trick almost always fails to precede the article with a big, red, bold "Please don't ever do this".
That doesn’t help people who stumble upon this when searching for the problem. All the “look it up” response does is make sure the search results are a bunch of content saying “look it up”, which isn’t really that helpful.
This used to be done quite a lot in the early JS days when scope was kind of thrown out the window (no pun) and you just did whatever dirty thing you needed to in order to make a page work.
lol, I just searched "problem with referencing named element ids as javascript globals": the first result is the linked article and the second result is, you guessed it, this thread with your comment on top.
Yeah, HTML5 explicitly documented the compatible behaviors between browsers to reach uniformity, which meant standardizing a lot of weird stuff instead of trying to fix it.
Yep, same here. The only time I use this bit of knowledge nowadays is in the console. If I see a tag has an ID, I save myself a few characters by just referring to it as a variable since I know it's already there anyways.
IDs were the only way to get a reference to an element early on if I'm remembering correctly. Or maybe the DOM API just wasn't well known. All the examples and docs just used IDs, that I can remember for sure.
This always reminded me of PHP's infamous register_globals. For those unfamiliar: anything in the $_REQUEST array (which merges $_POST, $_GET, and $_COOKIE) was added to the global scope. So if you made a request to index.php?username=root, $username would contain "root" unless you explicitly initialized it before use.
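The hazard translates directly; here's a hypothetical JS re-creation of what register_globals effectively did (this is an illustration, not real browser behavior):

```javascript
// Hypothetical analog of PHP's register_globals: every request
// parameter silently becomes a global variable.
function registerGlobals(queryString) {
  for (const [key, value] of new URLSearchParams(queryString)) {
    globalThis[key] = value; // e.g. ?username=root creates global `username`
  }
}

registerGlobals("username=root&admin=1");
// Any code that forgot to initialize `username` before reading it
// now sees attacker-supplied data.
```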
iirc this doesn't work in Firefox? or at least it doesn't work the same way as in Chrome. I developed a tiny home-cooked app[0] that depended on this behavior using desktop Chrome which then broke when I tried to use it on mobile Firefox. I then switched it to using
document.getElementById
like I should have and everything worked fine. Like others in this thread, I recommend not relying on this behavior.
You can make this yourself with Proxy. I get a lot of mileage out of this:
// proxy to simplify loading and caching of getElementById calls
const $id = new Proxy({}, {
// get element from cache, or from the DOM (the extra parameter is just a scratch variable)
get: (tgt, k, r) => (tgt[k] || ((r = document.getElementById(k)) && (tgt[k] = r))),
// prevent overwriting
set: () => { throw new Error("Attempt to overwrite id cache key!"); }
});
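The same caching pattern can be demonstrated outside the browser by injecting the lookup function instead of hardcoding document.getElementById (the stub names here are mine):

```javascript
// Generic memoizing proxy: first access calls lookup(), later accesses
// for the same key are served from the cache.
function makeCachedLookup(lookup) {
  return new Proxy(Object.create(null), {
    get(cache, key) {
      if (!(key in cache)) cache[key] = lookup(key);
      return cache[key];
    },
    // prevent overwriting cached entries, like the $id proxy above
    set() { throw new Error("Attempt to overwrite cache key!"); }
  });
}

let calls = 0;
const byId = makeCachedLookup(id => (calls++, "element:" + id));

byId.header; // miss: invokes the lookup
byId.header; // hit: served from the cache, lookup not called again
```

One caveat of the original `tgt[k] ||` form: a falsy cached value is re-fetched every time, which the `in` check here avoids.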
The Netscape of the 90s wasn't interested in making features ‘safe’. They were about throwing out features as quickly as possible to see what would stick.
The simplest possible syntax is to make named elements available globally, and if that clashes with future additions to the DOM API then well that's a problem for some future idiots to worry about.
as a strategy it worked pretty well, unfortunately
I saw this "shortcut" used in code snippets, on online JS/CSS/HTML editors like JSFiddle. It did not even occur to me this was part of JS spec, I thought the editor was generating code behind my back!
If you read the article and the spec, you'll see that explicitly created variables always take precedence over automatic IDs, so your own globals will always shadow these IDs.
In the additional considerations section [1], they mention inconsistent behavior between browsers. Those are the kinds of issues that are quite difficult to debug.
You shouldn't be using IDs anyways. They are just bad for a lot of reasons: you can only have one per page, which reduces reusability. Use classes instead.
IDs aren't bad; they're unique identifiers, and useful for (deep) linking to specific pieces of content within documents. Please use IDs as liberally as you please, and use them for their proper purpose.
For me, the disadvantage above any listed on the blog is that if I saw this global variable referenced in some code (especially old code, where some parts might be defunct), I would have absolutely no idea where it came from, and I bet a lot of others would struggle too.
This was mostly useful back in the days when we had to manually query dom during development and debugging. I've seen some pretty horrible things but never have I seen this in a codebase, not even in a commit
I remember using it on the first Javascript I ever used around 20 years ago. I naively assumed that the DOM was like state in a more procedural language and this variable trick played into that.
The IDs in the DOM will never conflict with or cause an issue in your own JS code. You can't reliably use "named access on the window object" (the name of this feature) because of this, so it's never a problem, and also largely useless.
*rigmarole, if we're being pedantic, but I suspect the contemporary spelling "rigamarole" is gaining on the proper spelling, and that's one of the wonderful/terrible things about the English language.
I don't want to sound like I have an axe to grind (but I do), but this is the kind of feature/wart that shows the age of the HTML/CSS/JS stack.
The whole thing is ripe for a redo. I know they get a lot of hate, but of all the big players in this space I think FB is the best equipped to do this in a way that doesn't ruin everything. I just wonder if they have an incentive (maybe trying to break the Google/MS hegemony on search?).
I think that this is the case (right now) because of Apple's stranglehold on the browser on iOS and the complex relationship between Google/Apple.
If FB could launch a browser on iOS that was in their walled garden, not only would it quickly receive wide adoption but it might become people's primary browser.
Not that I necessarily think that's a good thing, mind you.
Why would it quickly receive any adoption? Of all of the behemoths, I would trust FB the least here. Not that I trust any of the other big players enough not to use Firefox everywhere I can.
Web developers have worked around quirks for as long as I can remember. The stack has many warts, but we learn to adapt to them. Like 90% of a web developer's job is working around gotchas, and it will continue that way. A "redo" might not be needed. Developers need something to moan about and something to keep them employed :)
I find it pretty funny that we humans have invented all these transpilers and bundlers, invested probably billions of dollars in JITs, just to keep writing JS
It uses a property interceptor, which is fairly slow in V8:
https://source.chromium.org/chromium/chromium/src/+/main:out...
to call this mess of security checks:
https://source.chromium.org/chromium/chromium/src/+/main:thi...
which has this interop surprise:
https://source.chromium.org/chromium/chromium/src/+/main:thi...
which in the end scans the document one element at a time looking for a match here:
https://source.chromium.org/chromium/chromium/src/+/main:thi...
In contrast, getElementById is just a hash map lookup; it only does any scanning if there are duplicate elements with that id, and it never surprisingly returns a list!
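The contrast can be sketched with a toy model (my illustration, not engine code):

```javascript
// Toy contrast between the two lookup strategies described above.
const elements = [{ id: "a" }, { id: "b" }, { id: "c" }];

// getElementById-style: the document maintains an id -> element map,
// so lookup is a single hash probe.
const idMap = new Map(elements.map(el => [el.id, el]));
const fast = idMap.get("c");

// Interceptor-style: scan the document's elements one at a time
// until something matches.
const slow = elements.find(el => el.id === "c");
```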