function test(text) {
  const a = 1
  function inner() {
    console.log(eval(text))
  }
  inner()
}
test("() => { return a }")
prints 1 to the console
This happens because the closure's context object is shared between all closures in a given scope. So as soon as one variable from a given scope is accessed through a closure, all variables will be retained by all inner functions.
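A minimal sketch of that shared-environment behavior (the names here are made up for illustration): two closures created in the same scope read and write the same binding rather than private copies, which is why an engine that keeps the environment alive for one closure keeps it alive for both.

```javascript
function makePair() {
  let n = 0;
  const inc = () => { n += 1; }; // writes n through the shared environment
  const get = () => n;           // reads the very same binding
  return { inc, get };
}

const pair = makePair();
pair.inc();
pair.inc();
console.log(pair.get()); // 2 — both closures see one binding, not copies
```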
Technically, engines could optimize this when no use of eval is detected, or in strict mode (which prevents eval from introducing new bindings into the surrounding scope), but I guess that dynamically dropping values from a closure context based on the inner lexical scopes can be a really tricky thing to do and probably not worth the overhead.
[later edit: I re-read the example code, and the issue is a double capture and the author not really understanding the semantics of capture in JS, I've given a more technical breakdown and explanation at https://news.ycombinator.com/item?id=41113481]
Nah, in the ES3.1/ES5 era we (TC39) fixed the semantics of eval to make it explicit that scope capture only occurs in a direct/unqualified eval (essentially eval becomes a pseudo-keyword).
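A sketch of that direct vs indirect distinction (variable names are my own, purely for illustration): an unqualified call written literally as `eval(...)` sees the enclosing scope, while any aliased or otherwise qualified call is an indirect eval and evaluates in the global scope instead.

```javascript
globalThis.a = "global";

function directEval() {
  const a = "local";
  return eval("a"); // direct eval: evaluates in the enclosing scope
}

function indirectEval() {
  const a = "local";
  const e = eval;   // aliasing eval makes the call indirect
  return e("a");    // indirect eval: evaluates in the global scope
}

console.log(directEval());   // "local"
console.log(indirectEval()); // "global"
```

This is why only a literal `eval(...)` call forces the engine to assume the whole enclosing scope may be captured.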
The big question is scope capture. In JSC I implemented free vs captured variable analysis many years ago, though the primary reason was to be able to avoid allocating the activation/scope object for a function at all; that did also allow us to handle this case (I'm curious whether JSC still does this).
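A sketch of what that analysis distinguishes (the example functions are mine, not from JSC): in the first function every variable is purely local, so an engine need not allocate a scope/activation object at all; in the second, one variable escapes into a closure and must live in a heap-allocated environment while the other can be dropped.

```javascript
function onlyLocals() {
  const x = 1; // never referenced by an inner function:
  const y = 2; // these can live in registers / on the stack
  return x + y;
}

function capturesOne(z) {
  const unused = "not captured"; // free to discard after the call
  return () => z;                // z is captured: it must outlive the call
}

console.log(onlyLocals());      // 3
console.log(capturesOne(42)()); // 42
```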
The problem with doing that is that it impacts developer tools significantly.
Essentially, developers want to be able to debug their code, but you run into issues if they start debugging after page load: if you do optimize this, anything they may want to inspect may already have been optimized away, and at that point you (the engine dev) can't do anything to bring it back. You could say "from this point on I'll keep things around" (JSC, at least in the past, would essentially discard all generated code when the dev tools were opened). You might also say "if the dev tools are enabled (not open), I'll always do codegen assuming they'll be needed". Or you might say "if the dev tools are open on page load, generate everything as [essentially] debug code" (which is still optimized codegen, but less aggressive about GC-related things).
All of those options work, but the problem you then have is potentially significant changes in behavior when the dev tools are enabled and/or open, especially nowadays when JS exposes various weak reference types whose observable behavior this kind of change directly impacts. So the question becomes: how much of a behavioral difference am I willing to accept between execution with and without dev tools involved?
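For example (a hedged sketch, not specific to any engine), a WeakRef observes exactly the kind of liveness that closure-context optimizations change: if the engine drops a captured-but-unused variable, the WeakRef's target can be collected sooner; if dev tools force everything to be retained, it may never be. GC timing is not deterministic, so only the same-job guarantee is asserted below.

```javascript
function capture() {
  let big = { payload: "large" };
  const ref = new WeakRef(big);
  const closure = () => 1; // never uses `big`; whether `big` stays reachable
                           // depends on how the engine models the closure's context
  big = null;
  return { ref, closure };
}

const { ref } = capture();
// The spec keeps a WeakRef's target alive until the end of the current job,
// so this deref is guaranteed; a *later* deref returning undefined or not is
// exactly the dev-tools-visible behavioral difference discussed above.
console.log(ref.deref() !== undefined); // true (within the same job)
```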
It's possible (I haven't worked directly on browser engines in a number of years now) that the consensus has become "no difference is ok": this kind of space leak is not super common, etc., and the confusion from different behavior with and without dev tools might be considered fairly obnoxious.