Hacker News: syg's comments

Exactly right. `arr[-1]` means `arr["-1"]` and already does something.


It is also a breaking change to use new syntax and functions, since old browsers do not support new features. From that perspective, `arr[-1]` seems a fair breaking change.


No, because changing browsers to interpret `arr[-1]` as `arr[arr.length - 1]` breaks existing sites that expect `arr[-1]` to be interpreted as `arr['-1']`: that is, the value stored on the object `arr` under the key name '-1'.

Changing browsers to interpret `arr.get(-1)` as `arr[arr.length - 1]` doesn't affect any old code using `arr[-1]`.

It's not about supporting old browsers. It's about supporting old code.
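
A quick sketch of the behavior being discussed (my own illustration, not from the thread):

```javascript
const arr = [10, 20, 30];

arr[-1] = 'stored';   // writes the string property "-1", not the last element

console.log(arr[-1]);              // 'stored'
console.log(arr.length);           // 3 -- "-1" is not an array index, so length is untouched
console.log(arr[arr.length - 1]);  // 30 -- the actual last element
```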


I think you're confusing your application with the language itself.

Adding new syntax and functions to the language is not a breaking change. Old code will continue to work.

If you start using these new features in your application, and it no longer works on old browsers, then sure that's a breaking change. But that's a choice for you to make. The language is still backwards compatible.


`indexes[haystack.indexOf(needle)] = true`

There's a valid example of code that would be broken (`indexOf` returns `-1` for "not found"). Is it a good way of solving whatever the author was trying to do? Probably not, especially now that sets exist. Is it code you might conceivably find on hundreds of sites across the past decades of the world wide web? You bet.
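
To make the breakage concrete, here's a hypothetical sketch of that pattern:

```javascript
const haystack = ['a', 'b', 'c'];
const indexes = [];

indexes[haystack.indexOf('b')] = true;  // indexes[1] = true
indexes[haystack.indexOf('z')] = true;  // indexOf returns -1, so this sets the key "-1"

console.log(indexes[-1]);      // true
console.log(indexes.length);   // 2 -- the "-1" write never touched the array part
```

If `arr[-1]` were redefined to mean `arr[arr.length - 1]`, that second write would silently clobber `indexes[1]` instead of landing on the harmless `'-1'` key.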

Yes, we could introduce another "use strict". But we only just got rid of the one via ESM (which enforces strict mode). That was a one-off hacky solution to a hard problem coming off the end of a failed major version release of the language (look up ECMAScript 4 if you get a chance). We don't want to see a repeat of that.


WeakRef and FinalizationRegistry will ship in Chrome 84.


I didn't know it was this close to production finally, thank Jebus!
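
For reference, a minimal sketch of the API these classes expose; the cache pattern is my own illustration, not something from the spec:

```javascript
// A cache whose values can still be garbage-collected.
const cache = new Map();  // key -> WeakRef

// Called (eventually, at the engine's discretion) after a value dies.
const registry = new FinalizationRegistry(key => cache.delete(key));

function cacheSet(key, value) {
  cache.set(key, new WeakRef(value));
  registry.register(value, key);
}

function cacheGet(key) {
  const ref = cache.get(key);
  return ref && ref.deref();  // undefined once the value has been collected
}

const obj = { value: 42 };
cacheSet('answer', obj);
console.log(cacheGet('answer').value);  // 42, while obj is still strongly referenced
```

Note that finalization timing is deliberately unspecified: the registry callback may run much later than the actual collection, or not at all if the program exits first.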


Early error behavior is proposed to be deferred (i.e. made lazy), not skipped. Additionally, it is one of many things that require frontends to look at every character of the source.

I contend that the text format for JS is in no way easy to implement or extend, though I can only offer my personal experience as an engine hacker.


If an early error is deferred, then it's no longer early... that's all I meant by "skipped". It is still a semantic change that's unrelated to a binary AST.


Indeed it's a semantic change. Are you saying you'd like that change to be proposed separately? That can't be done for the text format for the obvious compat reasons. It also has very little value on its own, as it is only one of many things that prevents actually skipping inner functions during parsing.
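
For readers following along, an example of what "early" means here (my own illustration): early errors are raised when the source is parsed, before any of it runs.

```javascript
// Duplicate parameter names are an early SyntaxError in strict code.
// The error fires at parse time, even though f is never called:
let caught = null;
try {
  eval('function f(a, a) { "use strict"; }');
} catch (e) {
  caught = e;
}
console.log(caught instanceof SyntaxError);  // true
```

Deferring the check would mean the error only surfaces when `f` is first invoked, which is the semantic change under discussion.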


The gzip point aside (which is not an apples-to-apples comparison as gzipping a big source does not diminish its parse time), I see the response of "JS devs need to stop shipping so much JS" often. My issue with this response is that multiple parties are all trying to work towards making JS apps load faster and run faster. It is easy enough to say "developers should do better", but that can be an umbrella response to any source of performance issues.

The browser and platform vendors do not have the luxury of fiat: they cannot will away the size of modern JS apps simply because they are causing slowdown. There can be engineering advocacy, to be sure, but that certainly shouldn't preclude those vendors from attempting technical solutions.


Please do not disparage an entire committee because you disagree philosophically with one proposal in it.


I don't disagree with just one proposal. I disagree with multiple proposals and the committee's approach to the "process".


This is important, as there seems to be a lot of misunderstanding in this thread.

What's proposed is structural compression of JS with JS-specific bits to speed things up even more. What's proposed is not compiled JS, in that the original JS is not meaningfully transformed at all. There is a very explicit design goal to retain the original syntactic structure.

OTOH WebAssembly is like a new low-level ISA accessible from web browsers. To use wasm, one does have to compile to it. As the parent here says, compiling JS -> wasm currently requires at least a GC but also much more. To engineer a performant VM and runtime for a dynamic programming language like JS is a time-consuming thing. It is curious to see so many folks think that JS can straightforwardly be compiled to wasm. Currently wasm and JS serve very different needs; and I also don't think JS will be going anywhere for many, many years.

Edit: formatting


This is a very good point. I would also add that there are a lot of languages compiling to JavaScript that would similarly not benefit from wasm. Right now, pretty much all GC-ed languages compiling to JS (such as ClojureScript, Elm, Scala.js, BuckleScript, PureScript, etc.) are in that category. Even if/when wasm supports GC in the future, I expect that dynamically typed languages compiling to JS will still be a long way from benefiting from wasm.

However, all these languages can benefit from the Binary AST right away. Just like any codebase directly written in JavaScript. If the Binary AST has some decent story about position mapping (see [1] which I just filed), they might even get somewhat better tooling/debugging support going through the Binary AST than going through .js source files, out of the box.

[1] https://github.com/syg/ecmascript-binary-ast/issues/17


For context: position mapping is somewhere on our radar, but we haven't reached a stage at which it would make sense to start working on it yet.

If you have ideas and/or spare cycles, of course, they are welcome :)

As a side-note: I believe that we could build upon the (very early) proposed mechanism for comments to also store positions. Size-optimized BinAST files would drop both comments and positions, while debugging-optimized BinAST files would keep both at the end of the file so as to not slow down parsing until they are needed.


That seems awesome. I'm glad that it's on your radar.

If you can point me to the best place to suggest ideas or spend my spare cycles, I would gladly do so. At the very least, I can comment on how we serialize positions in the (tree-based) Scala.js IR, which is size-optimized.


The tracker on which you're posting is a good place, thanks a lot :)


Off-topic: I'm sorry to go a little squishy, but I thought I should say that I appreciate the work both of you do very much... though sjrd's work is a little bit more "direct-impact-for-me" at the moment, I must admit. :p

Of course, as you just both said, your work is kind of complementary... which is always nice. :)

Anyway, thanks for the long-term thinking to the both of you.


thanks :)


A lot of the languages you mentioned have memory-allocation characteristics different enough from JavaScript's, due to immutability and a functional style, that they would probably benefit from a garbage collector tuned to their purposes in WebAssembly. There's a reason we don't have one common garbage collector for all managed languages.

I do recognize that this is a side point, but I think it's worth mentioning.


A lot of the mentioned languages also allow deep interoperability between their heap and the JavaScript heap, e.g., circular references between objects of the "two heaps", and free access to fields and methods of objects of the other language.

That's very hard (if not impossible) to achieve without leaks and performance degradation if the two languages have their own GCs, with their own heaps.

Compiling a language to JS is not about making it work. That's easy (it becomes hard to cite a language that does not do it). It's about designing the language to interoperate with JS, and making that work. That is the real challenge.


> Compiling a language to JS is not about making it work. That's easy (it becomes hard to cite a language that does not do it). It's about designing the language to interoperate with JS, and making that work. That is the real challenge.

It's very interesting that Scala and Scala.js have such a relatively painless interaction, but in general, isn't interoperation "technically" simple if you just employ an FFI?

Obviously, words like "seamless" and "effortless" start to enter the vocabulary here, but I'm not entirely sure these targets are worth it. Are they, do you think?

(I mean, obviously, Scala.js must have seamless interop with Scala, but is seamless interop with JS worth it, or should you require an explicit FFI? I'm not sure, but I think you ultimately chose FFI-via-annotations, though there's a lot of fuzziness wrt. js.Dynamic.)


Don't you think that someone is eventually going to compile a JVM to wasm, which would allow languages that compile to JVM bytecode to run directly in the browser as standard JVM bytecode? Wouldn't that allow performance as good as compiling to JS? (I'm asking you since it looks like you have some expertise on languages that compile to both the JVM and JS ;))


It might allow you to have as good performance as JS compilation, but definitely not as good interoperability with JS. Some of those languages, like ClojureScript, Scala.js and BuckleScript, have complete 2-way interop between them and JS, including for mutable objects, their properties and their methods.

"Just" compiling Scala to JVM-on-wasm does not give you the real power of Scala.js, which is its interoperability with JavaScript libraries. Similarly, just compiling Clojure to JVM-on-wasm does not give you the real power of ClojureScript.

People often forget about the interop with JS part--which is immensely more important than raw performance--if they don't actually work with a language that offers it. :-(


Shu here. I'm the person drafting the memory model for the SharedArrayBuffer spec, and as Dave says, it'll be the basis for the wasm story as well.

Lars Hansen deserves most of the credit for the actual spec -- I'm just doing the memory model. :)


Shu,

The concurrency/memory model nerds out here would love to see an early draft if at all possible :)

If nothing else, is it going to be weaker than sequential consistency?


The current draft is available at http://tc39.github.io/ecmascript_sharedmem/shmem.html

The two strengths provided by the model are sequentially consistent atomics and something between the strengths of C++'s non-atomics and relaxed atomics. Races are fully defined, and there is no undefined behavior or undefined values.
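
To illustrate the seq-cst atomics piece, a small sketch using the Atomics API from the proposal (my own example, not from the spec draft):

```javascript
const sab = new SharedArrayBuffer(8);
const view = new Int32Array(sab);

Atomics.store(view, 0, 42);            // sequentially consistent write
console.log(Atomics.load(view, 0));    // 42, sequentially consistent read
console.log(Atomics.add(view, 0, 1));  // 42 -- read-modify-write, returns the old value
console.log(Atomics.load(view, 0));    // 43
```

Plain (non-Atomics) reads and writes of `view` are the racy-but-defined accesses the model also has to cover.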

I'm happy to discuss things more in a new thread or in private communication and would prefer to not derail this thread about VLC.


Thank you!


The slowness of functional methods like .map and .forEach for a time was due to their not being self-hosted. Since then, both V8 and SpiderMonkey self-host them, and bz has posted some numbers below [1].

But perf problems are more numerous still for these functional methods, because compilers in general have trouble inlining closures, especially for very polymorphic callsites like calls to the callback passed in via .map or .forEach. For an account of what's going on in SpiderMonkey, I wrote an explanation about a year ago [2]. Unfortunately, the problems still persist today.

[1] https://news.ycombinator.com/item?id=7938101

[2] http://rfrn.org/~shu/2013/03/20/two-reasons-functional-style...
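
To give a sense of what "self-hosted" means here: the engine implements these builtins in JS itself. A simplified sketch of a self-hosted `.map` (my own illustration, not the real engine code, which also handles species, proxies, etc.):

```javascript
function selfHostedMap(arr, callback, thisArg) {
  const len = arr.length >>> 0;
  const result = new Array(len);
  for (let i = 0; i < len; i++) {
    // Skip holes in sparse arrays, as the spec requires.
    if (i in arr) result[i] = callback.call(thisArg, arr[i], i, arr);
  }
  return result;
}

console.log(selfHostedMap([1, 2, 3], x => x * 2));  // [ 2, 4, 6 ]
```

The polymorphic-callsite problem is visible here: the single `callback.call` site is invoked with every callback ever passed in, so the JIT cannot easily specialize or inline it.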


I actually don't think we're sharing any code with Gordon. But yes, the name is this obscure transitive-closure reference: (Adobe) Flash -> (Flash) Gordon -> (Gordon) Shumway.


wow. as someone born in the late '80s, thank you.


To be more precise, JITs on top of JITs. :)

We have both an interpreter for ActionScript bytecode as well as a compiler that compiles that bytecode method-at-a-time to JavaScript using a restructuring approach like emscripten's relooper.

Disclaimer: I work on Shumway.


I could give you the old guy's rant: "Back in my day we were happy to have only one 'core', and if it ran at 8 MHz it was in turbo mode!" :-) It's an amazing piece of work.


Bah! Back in my day, we counted ourselves lucky if we had an 8 MHz crystal clock. I had to make do with an uncomfortably temperature-dependent RC oscillator running at what I lightheartedly hoped was about 4 MHz.

(I'm actually pretty young, but I've done work with microcontrollers, which feels like stepping into the past. I know a guy in his late 20s who steadfastly refuses to switch from assembly to C, for reasons that come straight out of the 70s. It's a strange world we live in.)

