As with asm.js, it wouldn't need to be universally accepted: it can "sneak in the back door" by continuing to work fine in browsers that "just" support standard JavaScript.
If code using this method clearly labels the bytecode, the interpreter, and the part that background-loads the "real thing", then implementations can opt to add whatever optimisations they like to speed it up as it stabilises. It doesn't matter if the bytecode changes, as long as it's labelled properly so the optimised versions fall back to just interpreting the JS if they come across a version (of the interpreter/bytecode as a whole, or just a single opcode) they don't understand (or that the implementer hasn't seen a need to optimise).
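To make the fallback idea concrete, here's a minimal sketch (all names, like the version string and the `fastPaths` table, are made up for illustration, not any real engine API): the shipped page carries a labelled bytecode module plus a plain-JS interpreter, and an engine only takes an optimised path for format versions it actually recognises.

```javascript
"use strict";

// Hypothetical labelled module: the format string tells an engine
// whether it recognises this bytecode version.
const labelledModule = {
  format: "example-bytecode-1",
  bytecode: [/* opcodes would go here */],
};

// An engine's optimised entry points, keyed by the format label.
// Empty here: this engine hasn't opted to optimise any version yet.
const fastPaths = {};

// The plain-JS interpreter shipped with the page: always correct,
// merely slower. Stubbed out for the sketch.
function runViaInterpreter(bytecode) {
  return "interpreted";
}

function run(mod) {
  const fast = fastPaths[mod.format];
  // Unknown version (or no fast path yet)? Fall back to plain JS.
  return fast ? fast(mod.bytecode) : runViaInterpreter(mod.bytecode);
}

console.log(run(labelledModule)); // "interpreted" (no fast path registered)
```

The point is that the fast path is purely opt-in per version label, so the bytecode format can keep changing without ever breaking anyone.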
If the interpreter is guaranteed to retain a certain structure, it could be very easy to just "unroll" the interpreter loop and selectively JIT portions of the bytecode based on hotspots. You can optimise that a lot in a non-bytecode-specific way by annotating the interpreter loop with assertions that grant extra guarantees (immutable bytecode; markers to indicate which code is only interpreter scaffolding; decoding hints). If you also tack on "labels" for each instruction, implementations can special-case individual instructions that "settle" while still handling new instructions/changes by inlining the interpreter code.
> What does a specific bytecode buy you beyond slightly shorter load times (which the emterpreter already gives a way to greatly reduce)
The full speed from the start; note the substantially lower speed for the first little part. And the example codebase is small compared to some of the things people want to run.
I think we sort of agree. I don't necessarily think there's a reason to specify a standard bytecode, exactly because this approach could conceivably be extended to effectively give us a "mostly standard" bytecode with the freedom to continue to change the format without a lengthy committee process, because there's a demonstrably viable fallback.