
V8 engineer here: basically, a serialized walk through the object tree, starting from the top-level script. Predominantly, this means bytecode, scoping metadata, and strings.
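For anyone curious what that looks like from the embedder side, here's a minimal sketch using Node's vm module (assuming Node >= 10.6; the Buffer it produces holds that serialized bytecode/metadata, not machine code):

    const vm = require('vm');

    const source = 'function add(a, b) { return a + b; } add(1, 2);';

    // First compile: ask V8 to serialize its code cache for this script.
    const first = new vm.Script(source, { filename: 'example.js' });
    const cache = first.createCachedData();   // Buffer of bytecode + metadata

    // A later compile (e.g. on the next process start) can reuse that cache.
    const second = new vm.Script(source, {
      filename: 'example.js',
      cachedData: cache,
    });
    console.log(second.cachedDataRejected);   // false if V8 accepted the cache
    console.log(second.runInThisContext());   // 3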



Is x86/ARM binary code ever cached? Or is that not feasible? Seems like that could save a lot of time?


Good question: unfortunately not. We only really generate machine code for hot, optimized functions, using TurboFan (all the other machine code, like builtins and the interpreter, is loaded from the "snapshot"). This code is pretty execution-specific, and will include things like constants, raw pointers, and logic specific to the hidden class transition trees. Relocating all of this when serializing/deserializing a cache would be expensive, and we'd probably also have to serialize half the state of the current VM heap, so it would overall be a net loss.
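As a rough plain-JS illustration of that shape-dependence (not V8 internals): optimized code for a property load is specialized to the hidden classes it has seen, so objects built differently trip its baked-in checks:

    function getX(p) { return p.x; }

    // All of these share one hidden class, so getX gets hot and is
    // optimized against that single shape.
    for (let i = 0; i < 100000; i++) getX({ x: i, y: i });

    // Different property order / missing property = different hidden
    // classes; the shape checks baked into the optimized code fail and
    // execution falls back (and may eventually deoptimize).
    getX({ y: 1, x: 2 });
    getX({ x: 3 });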

Additionally, our current caching only happens after initial top-level execution, where hopefully[0] you haven't run enough code for it to be hot yet, so there wouldn't be any optimized code to cache anyway.

[0] I say "hopefully" because getting code hot during initial execution would usually mean that execution is stalling the main thread at startup, which isn't great.


I'd like to think so, though wouldn't dynamic typing mean branching is still possible, so there's likely still some runtime checking before you can be sure a precompiled bit is valid? All fascinating stuff.
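Right, here's a sketch of that runtime check with nothing V8-specific in it (the names are made up): the specialized fast path is guarded by a type check, with a generic path standing in for deoptimization:

    // Generic path: full JS semantics (coercion, string concat, ...).
    function genericAdd(a, b) {
      return a + b;
    }

    // What a JIT conceptually emits for an add it has only seen with numbers.
    function specializedAdd(a, b) {
      if (typeof a === 'number' && typeof b === 'number') {
        return a + b;              // fast path, valid only for numbers
      }
      return genericAdd(a, b);     // guard failed: "deopt" to generic code
    }

    console.log(specializedAdd(1, 2));    // 3    (fast path)
    console.log(specializedAdd('1', 2));  // "12" (fallback)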


I think you solve that by dynamic function specialization and perhaps also trace compilation? https://en.wikipedia.org/wiki/Tracing_just-in-time_compilati...
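A toy sketch of that idea (purely illustrative; note V8 itself is a method JIT, not a tracing JIT): a tiny interpreter counts how often an operation runs, and once it's hot, swaps in a specialized closure standing in for a compiled trace:

    function makeInterpreter(hotThreshold = 1000) {
      const counts = new Map();
      const compiled = new Map();

      return function run(op, a, b) {
        const fast = compiled.get(op);
        if (fast) return fast(a, b);           // run the "compiled trace"

        const n = (counts.get(op) || 0) + 1;
        counts.set(op, n);
        if (n >= hotThreshold) {
          // Hot: a real tracing JIT would record the executed path and emit
          // guarded machine code; a closure stands in for that here.
          if (op === 'add') compiled.set(op, (x, y) => x + y);
          if (op === 'mul') compiled.set(op, (x, y) => x * y);
        }

        // Slow, generic interpreted path.
        switch (op) {
          case 'add': return a + b;
          case 'mul': return a * b;
          default: throw new Error('unknown op ' + op);
        }
      };
    }

    const run = makeInterpreter();
    let acc = 0;
    for (let i = 0; i < 5000; i++) acc = run('add', acc, i);
    console.log(acc);  // 12497500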



