
If they focus on the core features and make a significant enough improvement, perhaps the core TS team could be convinced to pull it in as a native extension and make calls to it for the operations it covers? If something like 62x improvement is really achievable, it's hard to imagine the team not being interested at all.

I would personally be very happy to use a significantly slimmed down subset of TypeScript if it provided a 62x performance improvement. Slow type checking is currently the main drawback of TypeScript development in large projects.




I'm not saying this is impossible, but I think you'd have to work rather carefully to avoid parsing the code twice and effectively running two type checkers in parallel. Though, on second thought, you could run the fast one first and then the slow one. Perhaps that's an option.

> I would personally be very happy to use a significantly slimmed down subset of TypeScript if it provided a 62x performance improvement. Slow type checking is currently the main drawback of TypeScript development in large projects.

We all think that until the slimmed-down subset doesn't work with our favorite libraries. Sure, there may be people who use no dependencies, but I'd bet that there are sophisticated types in some very popular libraries that even low-dependency programmers use.


To this point, many teams are already using subsets of TypeScript to improve compilation times. Often it’s things like always declaring return types on functions so the compiler doesn’t have to infer them (especially across module boundaries). Calling this a “subset of TypeScript” might be strong wording, but it does suggest there is some desire in that general area.
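As a sketch of the annotated style (the function and names here are illustrative, not from any particular codebase): with the return type declared, the checker verifies the body against the signature instead of inferring the full return type and re-propagating it to every importer.

```typescript
// Hypothetical example: an explicit return type on a function that would
// otherwise need inference. Callers (and the compiler) rely on the
// declared signature rather than re-deriving it across module boundaries.
function parseVersion(v: string): { major: number; minor: number } {
  const [major, minor] = v.split(".").map(Number);
  return { major, minor };
}

console.log(parseVersion("4.9")); // { major: 4, minor: 9 }
```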


It seems like, depending on how you configure TypeScript (e.g., in tsconfig), TypeScript is already something like an ensemble of mini-languages with different dialects and semantics. Much more so than other languages. But I agree: a restricted TypeScript that has to do less work (data or control flow analysis) would probably be orders of magnitude faster.


> To this point, many teams are already using subsets of TypeScript to improve compilation times. Often it’s things like always declaring return types on functions so the compiler doesn’t have to infer them (especially across module boundaries).

I hadn't heard of this trick - how much improvement does it make? Seems like it might be good for readability, too?


I haven't found any large scale analyses but here's an example of a simple type annotation halving compilation time: https://stackoverflow.com/questions/36624273/investigating-l...

The TypeScript compiler performance wiki might also be of interest: https://github.com/microsoft/TypeScript/wiki/Performance


I haven’t personally benchmarked it, but I know it’s available as an eslint rule (all exported functions must have declared return types).
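For reference, the rule described here is likely `@typescript-eslint/explicit-module-boundary-types` (requiring explicit return and argument types on exported functions). Enabling it looks roughly like this, assuming the typescript-eslint plugin is installed:

```jsonc
// .eslintrc fragment (illustrative)
{
  "plugins": ["@typescript-eslint"],
  "rules": {
    "@typescript-eslint/explicit-module-boundary-types": "error"
  }
}
```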


This makes sense for semantic reasons, though. If you accidentally return a `number` after returning a `string` in an earlier branch, TypeScript will happily unify them, and now your function silently returns `string | number`. The linting rule prevents this.
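A minimal sketch of that failure mode (hypothetical functions):

```typescript
// Without an annotation, tsc silently unifies the branches:
// the inferred return type here is `string | number`, not `string`.
function label(n: number, pretty: boolean) {
  if (pretty) {
    return `#${n}`;
  }
  return n; // oops: meant to return a string here too
}

// With `: string` declared, the stray branch is a compile error,
// forcing the fix:
function labelChecked(n: number, pretty: boolean): string {
  if (pretty) {
    return `#${n}`;
  }
  return String(n);
}

console.log(typeof label(3, false));  // "number" leaks through at runtime
console.log(labelChecked(3, false));  // "3"
```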


I always thought this is because some people prefer this style.


One of the main requirements of TypeScript is that the compiler has to work in a browser. The Monaco editor is the crown jewel of TypeScript applications.

So the only possibility I see is a webassembly alternative.


Not saying that the TypeScript team should actually do this, but Rust does have best-in-class WASM support.


> I would personally be very happy to use a significantly slimmed down subset of TypeScript if it provided a 62x performance improvement.

I'm just guessing, but I think a lot of TypeScript's really complex stuff is there to support third-party libraries with really complex types - at least they often refer to libraries they've improved support for when they discuss complex-seeming additions. So I wonder how much of the JS ecosystem you'd have to cut off to get to such a subset. As a particular example, I've often seen a chain of 20+ dependencies leading to Lodash, and I'd guess that Lodash is really complex to type. If you lose everything that eventually depends on something that's really complex to type, do you have much of NPM left?


As a maintainer for Redux, Redux Toolkit, and React-Redux, I can tell you that we would _love_ to have several additional features in TS that would make our library TS types much easier to maintain. Like, say, partially specified generics.

Tanner Linsley (author of React Table, React Query, and other similar libs) has expressed similar concerns.

(I'm actually starting to work on a talk for the upcoming TS Congress conference about "lessons learned maintaining TS libs", and this will be one of the things I point to.)
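The partial-specification pain point can be sketched like this (the helper is purely illustrative, not Redux's actual API):

```typescript
// TypeScript type arguments are all-or-nothing: supply one explicitly
// and you must supply them all. Hypothetical library helper:
function makeSlice<State, Name extends string>(name: Name, initial: State) {
  return { name, initial };
}

// Full inference works fine:
const counter = makeSlice("counter", 0); // State = number, Name = "counter"

// But you can't pin down only State and let Name be inferred:
//   makeSlice<{ value: number }>("counter", { value: 0 });
//   // compile error: Expected 2 type arguments, but got 1.
// Today you have to spell out both:
const slice = makeSlice<{ value: number }, "counter">("counter", { value: 0 });

console.log(counter.name, slice.initial.value); // counter 0
```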


The issue here is that if you're working in the JS ecosystem, you'll definitely want to use other people's code (npm packages or internal libs). If the subset breaks JS compatibility, you can break a significant amount of code without realising it; if it's "only" a TS subset, then you need to make sure that every lib/package you import is compatible. Either way, this does not seem like a good solution.


To be fair, even within the TypeScript world that can be a problem. If TypeScript versions don't line up, or if your various strict options are too strict, you can get into really weird situations. It -generally- works, but when it doesn't, it's hell.


> I would personally be very happy to use a significantly slimmed down subset of TypeScript if it provided a 62x performance improvement. Slow type checking is currently the main drawback of TypeScript development in large projects.

Isn’t that already available by running in “transpile only” mode and having the type checking happen in the background?
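Concretely, the split might look something like this (a sketch assuming esbuild and tsc are installed in the project; paths are illustrative):

```shell
# 1) Strip types and emit JS immediately, without type checking:
esbuild src/index.ts --bundle --outfile=dist/index.js

# 2) Type-check out of band (in a watcher or in CI), emitting nothing:
tsc --noEmit --watch
```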


62x improvement is not achievable. 6.2x might be.

edit: I see they are running it with 8 threads, in which case yes, 50x is achievable. In any sizable codebase, though, you should probably split into modules and run with some parallelism via e.g. `wsrun` or `turborepo`.


esbuild is >125x faster than Webpack doing the exact same job. It's not some theoretical microbenchmark; you'll see it when converting a project from one to the other. If a piece of software hasn't been designed with speed in mind, and I don't think tsc has, there can be massive performance improvements possible when you create a new implementation with performance as a top goal.


Webpack does way more than esbuild, including running a typechecking compiler instead of just transpiling, running compilers able to downlevel emit to ES5, and providing a deep plugin architecture allowing you to hook into any bit you like. But yes, it hasn't been designed with speed in mind - it has been designed for maximum extensibility instead. It's the same reason why Babel is slow compared to sucrase (written in JS, currently faster than SWC and esbuild but doing somewhat less - https://github.com/alangpierce/sucrase)

tsc has in fact been designed with speed in mind (I've been following the project since before it moved to GitHub, which is when they redesigned it for speed). Going beyond 1 order of magnitude performance improvement while implementing the full feature set is highly unlikely.


62x is still one order of magnitude (it's under 100x).

I would bet on a port eventually being done and hitting at least 10X.


Well we have some intersection then, I think it will be 10x at most :)


Even if it’s not official, I could use a slimmed-down, faster TypeScript, even if it has slightly different semantics. I always use skipLibCheck, so TS only checks my code.

It’s a form of linting, not semantics, so it doesn’t have to be exact.
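For reference, that option lives in tsconfig.json; it skips type checking of all `.d.ts` declaration files, so only your own sources get fully checked:

```jsonc
// tsconfig.json (tsconfig files allow comments)
{
  "compilerOptions": {
    "skipLibCheck": true // don't type-check .d.ts files from dependencies
  }
}
```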



