Sucrase is proof that JS is not the problem when it comes to performance. JS is not slow. Node.js is not slow. It's the code that is slow. All these people wanting to write it in Rust or Go or XYZ programming language need to acknowledge this.
Yes, multithreading is awesome and really helpful, but it's the cherry on top, not the whole thing. If the same amount of effort were put into optimizing the TSC codebase as is being spent rewriting it in Rust, I have no doubt that it could become faster. Perhaps it'll require some big changes, but it won't create compatibility concerns and it won't be a cat-and-mouse race between the Rust version and the TS version.
I don't think "write it in Rust" is always the answer for making programs fast. Rust itself can be pretty damn slow if you don't keep performance in mind. That is why you have to profile and optimize, over and over again. Can't the same be done for TSC?
I think the biggest reason devs don't do this is because no one likes profiling and optimizing since it is a slow and boring task. Rewriting is so exciting! It's the thing you do when you are tired of maintaining the old codebase. So just ditch it and rewrite it in Rust.
I have nothing against Rust, mind you. I love what it has done, but I don't think rewriting everything is feasible, or even the solution. And waiting for that to happen for every slow tool out there is utter foolishness.
There was a back-and-forth on that subject a few years back: Mozilla rewrote their sourcemap parser from JS to naïve Rust (/WASM) for a 5.89x gain. mraleph objected to the need and went through an epic bout of algorithmic and micro-optimisations to get the pure JS to a similar level. Then the algorithmic optimisations were reimported into the Rust version for a further 3x gain (and much lower variance than the pure JS).
Sucrase consciously and deliberately breaks compatibility. Which, to be clear, isn't necessarily a bad thing for some use cases. But you can't really generalize from that to a tool like tsc where this isn't an option. There might be a performance ceiling here that can only be surpassed with a different language.
I suspect you have a point, given this line from the Sucrase readme:
> Because of this smaller scope, Sucrase can get away with an architecture that is much more performant but less extensible and maintainable. Sucrase's parser is forked from Babel's parser (so Sucrase is indebted to Babel and wouldn't be possible without it) and trims it down to a focused subset of what Babel solves. If it fits your use case, hopefully Sucrase can speed up your development experience!
I rewrote a card game engine, openEtG, from JS to Rust. It was a pretty 1:1 rewrite. 2x improvement even when I was using HashMap everywhere. I've since entirely removed HashMap, which has only further improved perf.
I suppose it depends on your use case, but I don't really consider 2x to be a significant difference. Between programming languages we often speak in orders of magnitude.
If JS is only half the speed of a compiled language like Rust, that shows remarkably optimized performance.
Twice as fast is a big deal for damn near any program except like...unimportant, already slow background tasks or things that are already exceptionally fast. Cutting the frame render time in half could give you twice the framerate. Twice the performance on a server could let you handle twice as many concurrent sessions (and possibly run half as many servers!). People regularly fight tooth and nail to squeeze 10% performance boosts on critical tasks; doubling it would be incredible.
Plenty of game engines are already spending less than a millisecond of CPU time per frame in their own code, so 2x one way or the other makes almost no difference.
Things don't need to be "exceptionally" fast to be in the area where programming language doesn't really matter.
> Twice the performance on a server could let you handle twice as many concurrent sessions (and possible run half as many servers!)
Which might matter, or it might not. Very situational.
> People regularly fight tooth and nail to squeeze 10% performance boosts on critical tasks, doubling it would be incredible
That kind of task is a small fraction of tasks. And often you're best off using a library, which can often make its own language choices independent of yours.
JIT/interpreted languages (Java, JS, Python etc.) cannot compete with optimized code from compiled languages.
The tradeoff is that Rust is lower-level, so it is harder to write. If performance were the only point of comparison for a language, then we'd all be using assembly. We choose to trade performance for productivity when we're able to.
The average Node.js app is slow. It requires godlike understanding/experience to make JS perform well. That's why you see people rewriting JS in Rust, Go, and Zig, for example SWC, Turbopack, esbuild, and Rome. For most use cases JS is plenty fast, but average Go code will be faster and easier to maintain.
As I am getting older, I do not want to spend my weekend learning about new features in next.js v13[1] or rewriting tests from enzyme to RTL[2]. I want to use a programming language that values its users' time and focuses on developer experience.
Not sure if serious or not, but Firefox is a long-standing example of this. It has always been a mixed C++/JS codebase. (Since before it was even called Firefox, that is; nowadays it's Rust, too.) I routinely point this out in response to complaints about the slowness attributed to e.g. "Electron". JS programs were plenty fast enough even on sub-GHz machines before JS was ever JITted. It's almost never the case that a program having been written in JS is the problem; it's the crummy code in that program. When people experience Electron's slowness, what they're actually experiencing is the generally low quality of the corpus that's available through NPM.
Arguably, the real problem is that GCC et al are enablers for poorly written programs, because no matter how mediocre a program you compile with them, they tend to do a good job making those programs feel like they're performance-tuned. Today's trendier technology stacks don't let you get away with the same thing nearly as much—squirting hundreds or thousands of mediocre transitive dependencies (that are probably simultaneously over- and under-engineered) through V8 is something that works well only up to a point, and then it eventually catches up with you.
Besides, there's no such thing as a fast or slow language, only fast and slow language implementations.
AFAIK all of the major browsers (and other JS runtimes) have implemented some performance-sensitive APIs in JS, specifically because it performs better than crossing the JS<->native boundary. Granted that’s usually specifically about JS API performance, but that’s a lot of where performance matters in a JS host environment.
How far can you get with JS/other interpreted things in e.g. optimizing for cache access etc? Sounds like you're at the mercy of the JIT compiler (which may go far, but still).
The interesting question to ask is whether these Rust rewrites are really taking advantage of cache optimization, or whether they are making simplifying assumptions that the canonical implementation cannot. In the latter case, Rust isn't the root of the performance difference, and a JS rewrite could make most of those same simplifying assumptions.
JavaScript does not have any input/output (IO/syscalls); all those functions, like reading a file or a socket, need to be implemented in the runtime language, like the browser, Node.js, ASP, etc. So you are at the mercy of the runtime executable. The JS JIT slows down startup time, as the JavaScript first has to be parsed and compiled before running, but the runtime can then cache the compiled version for faster startups. When JavaScript was new it was slow; for example, string concatenation was slow, but most JS engines now implement string concatenation with a "rope" structure, making it very fast. V8 (Chrome) and SpiderMonkey (Firefox) have gotten a lot of optimizations over the years, so if you are doing things like incrementing a number in a loop, it will be just as optimized as something written in a systems-level language.
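To make the rope point concrete, here's a minimal sketch. The claim above is that engines like V8 typically represent the result of repeated `+=` as a rope (a tree of string fragments), deferring the flatten until the characters are actually needed, so a naive build-up loop like this stays roughly linear instead of quadratic (`buildString` is just an illustrative name, not a real API):

```javascript
// Hedged sketch: repeated += on strings. In engines that use rope-like
// internal string representations, each += appends a fragment rather than
// copying the whole accumulated string, so this loop is cheap.
function buildString(n) {
  let s = "";
  for (let i = 0; i < n; i++) {
    s += "x"; // conceptually adds a rope node, not a full copy
  }
  return s;
}

console.log(buildString(100000).length); // 100000
```

Whether a given engine actually uses ropes for this exact pattern is an implementation detail; the observable part is simply that the loop doesn't degrade quadratically in practice.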
This can vary a ton as well. I was talking to a friend yesterday who said he was pounding a native browser interface with an iterator and experiencing slow performance. He switched to buffering the entire thing into memory first and experienced huge performance gains.
Any given part of the language, if optimized, is virtually always optimized for the most common use case. If your use case isn't the most common one, you have to account for that.
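The anecdote above can be sketched as a pattern, assuming a made-up per-item reader API (`readRecord`/`readAllRecords` are hypothetical stand-ins for a native browser interface, with a tiny in-memory fake so the sketch runs):

```javascript
// Tiny in-memory stand-in for a native API, so this sketch is runnable.
function makeReader(values) {
  let i = 0;
  return {
    readRecord: () => (i < values.length ? { value: values[i++] } : null),
    readAllRecords: () => values.map(v => ({ value: v })),
  };
}

// Per-item: one native-boundary crossing per record (slow in the anecdote).
function sumPerItem(reader) {
  let total = 0;
  let rec;
  while ((rec = reader.readRecord()) !== null) {
    total += rec.value;
  }
  return total;
}

// Buffered: one crossing to pull everything into memory, then pure-JS work.
function sumBuffered(reader) {
  const all = reader.readAllRecords();
  let total = 0;
  for (const rec of all) total += rec.value;
  return total;
}

console.log(sumPerItem(makeReader([1, 2, 3])));  // 6
console.log(sumBuffered(makeReader([1, 2, 3]))); // 6
```

Both produce the same result; the difference the commenter saw was in how many times the JS↔native boundary gets crossed.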
With stuff like TypedArrays, pretty far. Where JS has problems is optimising in the face of everything you can do with the language, and the current difficulty of multithreaded implementations.
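As a minimal illustration of the TypedArray point: a `Float64Array` stores numbers in one contiguous, unboxed buffer, so a linear scan over it is cache-friendly and easy for the JIT to compile into a tight loop, much like iterating a `double[]` in a systems language:

```javascript
// Sketch: contiguous, unboxed numeric storage via Float64Array.
// A plain Array of objects would scatter boxed values across the heap;
// this buffer keeps all the doubles adjacent in memory.
function sumTyped(n) {
  const data = new Float64Array(n);
  for (let i = 0; i < n; i++) data[i] = i;

  let total = 0;
  for (let i = 0; i < n; i++) total += data[i];
  return total;
}

console.log(sumTyped(1000)); // 499500 (sum of 0..999)
```

The language-level trade-off is that you give up the flexibility of regular arrays (mixed types, holes, arbitrary growth) in exchange for that predictable memory layout.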
As someone who contributed to swc-cli: Sucrase's benchmarks are pretty bad. SWC runs in sync mode, blocking the main thread, and in addition they are not using benchmark.js or isolated tests.