One thing to note is that it is impossible to strip types from TypeScript without a grammar of TypeScript. Stripping types is not a token-level operation, and the TypeScript grammar is changing all the time.
Consider for example: `foo < bar & baz > ( x )`. In TypeScript 1.5 this parsed as `(foo < bar) & (baz > (x))` because `bar & baz` wasn't a valid type expression yet. When the type intersection operator was added, the parse changed to `foo<(bar & baz)>(x)`, which desugars to `foo(x)`. I realise I'm going back in time here but it's a nice simple example.
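To make the two parses concrete, here is a sketch (the `declare` lines are hypothetical scaffolding so the snippet stands alone, and this is about how the tokens parse, not whether the program type-checks):

declare function foo<T>(x: unknown): void;
declare const bar: number, baz: number, x: number;

foo < bar & baz > ( x );
// Old parse (before intersection types): (foo < bar) & (baz > (x)),
//   i.e. two comparisons joined by bitwise AND
// Modern parse: foo<bar & baz>(x), a generic call that desugars to foo(x)
//   (the checker will still complain that bar/baz name values, not types)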
If you want to continue to use new TypeScript features you are going to need to keep compiling to JS, or else keep your node version up to date. For people who like to stick on node LTS releases this may be an unacceptable compromise.
It looks like the team has already considered this in one regard:
> There is already a precedent for something that Node.js supports that can be upgraded separately: it's npm.
> Node bundles a version of npm that can be upgraded separately; we could do the same with our TypeScript transpiler.
> We could create a package that we bundle but that can also be downloaded from NPM, keep a stable version in core, but if TypeScript releases new features that we don't support or breaking changes, or users want to use the new shiny experimental feature, they can upgrade it separately.
> This ensures that users are not locked in, but also provides support for a TypeScript version for the whole 3-year lifetime of a Node.js release.
As long as Node understands to use the project-specific version of TypeScript (i.e., the one in node_modules or the PnP equivalent), that should be fine.
But it would be a step backward to need to globally upgrade TypeScript (as you do with npm), since some older projects will not be compatible with newer versions of TypeScript.
At first I thought that's not a big deal, I could manage that with `nvm`. But I think you're right, you really want it to pick up the project-specific typescript so that you're using ONE version of typescript for type checking, execution, and more.
The syntax, from the perspective of type stripping, has been stable for far more versions of TypeScript than it was unstable. You had to reach all the way back to 1.5 in part because it's been very stable since about 2.x. The last major shift in syntax was probably Conditional Types in 2.8, which added the ternary if operator in type positions. (The type model, if you were to try to typecheck rather than just type-strip, has changed a lot since 2.x, but the syntax has been generally stable. That's where most of TypeScript's innovation has been: in the type model and type inferencing rather than in syntax.)
It's still just Stage 1 (early in the process), but the majority of TypeScript's type syntax, for the purposes of type stripping (not type checking), is attempting to be somewhat standardized: https://github.com/tc39/proposal-type-annotations
This is true, but in other cases they added keywords in ways that could work with type stripping. For example, the `as` keyword for casts has existed for a long time, and type stripping could strip everything after the `as` keyword with a minimal grammar.
When TypeScript added const assertions, they added them as `as const`, so type stripping could have still worked depending on how loosely it is implemented.
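A sketch of how loose such a rule could be and still handle `as const` (illustrative only, not how any real stripper works; `input` is a hypothetical value):

declare const input: unknown;

// Rule sketch: on `as`, drop tokens from the keyword to the end of the
// enclosing expression, whatever follows it.
const point = { x: 1, y: 2 } as const;   // strips to: const point = { x: 1, y: 2 };
const n = (input as unknown as string);  // strips to: const n = (input);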
I think there is a world where type stripping exists (which the TS team has been in favor of) and the TS team might consider how it affects type stripping in future language design. For example, the `satisfies` keyword could have also been added by piggy-backing on the `as` keyword, like:
const foo = { bar: 1 } as subtype of Foo
(I think not using `as` is a better fit semantically but this could be a trade-off to make for better type stripping backwards compatibility)
I don't know a lot about parser theory, and would love to learn more about ways to make parsing resilient in cases like this one. Simple cases like "ignore rest of line" make sense to me, but I'm unsure about "adversarial" examples (in the sense that they are meant to beat simple heuristics). Would you mind explaining how e.g. your `as` stripping could work for one specific adversarial example?
function foo<T>() {
return bar(
null as unknown as T extends boolean
? true /* ): */
: (T extends string
? "string"
: false
)
)
}
function bar(value: any): void {}
Any solution I can come up with suffers from at least one of these issues:
- "ignore rest of line" will either fail or lead to incorrect results
- "find matching parenthesis" would have to parse comments inside types (probably doable, but could break with future TS additions)
- "try finding end of non-JS code" will inevitably trip up in some situations, and can get very expensive
I'd love a rough outline or links/pointers, if you can find the time!
Most parsers don't actually work with "lines" as a unit; those are for user formatting. Generally the sort of building blocks you are looking for are more along the lines of "until end of expression" or "until end of statement". What defines an "expression" or a "statement" can be very complex depending on the parser and the language you are trying to parse.
In JS, because it is a fun example, "end of statement" is defined in large part by Automatic Semicolon Insertion (ASI), whether or not semicolons even exist in the source input. (Even if you use semicolons regularly in JS, JS will still insert its own semicolons. Semicolons don't protect you from ASI.) ASI is also a useful example because it is an ancient example of a language design intentionally trying to be resilient. Some older JS parsers even would ignore bad statements and continue on the next statement based on ASI determined statement break. We generally like our JS to be much more strict than that today, but early JS was originally built to be a resilient language in some interesting ways.
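The canonical ASI surprise, for anyone who hasn't been bitten by it:

// ASI inserts a semicolon immediately after `return`, so this returns
// undefined; the object literal below becomes an unreachable block statement.
function getConfig() {
  return
  {
    debug: true
  }
}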
Thanks for the response, but I'm aware of the basics. My question is pointed towards making language parsers resilient towards separately-evolving standards. How would you build a JS parser so that it correctly parses any new TS syntax, without changing behavior of valid code?
The example snippet I added is designed to violate the rules I could come up with. I'd specifically like to know: what are better rules to solve this specific case?
> How would you build a JS parser so that it correctly parses any new TS syntax, without changing behavior of valid code?
I don't know anything about parsers besides what I learned from that one semester's worth of introductory class I took in college, but from what I understand of your question, I think the answer is you can't, simply because we can't look into the future.
1. Automatic semicolon insertion would next want to kick in at the } token, so that's the obvious end of the statement. If you've asked it to ignore from `as` to the end of the statement (as you've established with your "ignore to the end of the 'line'"), that's where it stops ignoring.
1A. Obviously in that case `bar(null` is not a valid statement after ignoring from `as` to the end of the statement.
2. The trick to your specific case, the one you've stumbled into, is that `as` is an expression modifier, not a statement modifier. The argument to a function is an expression, not a statement. That definitely complicates things because "end of the current expression" is often a lot more complicated than ASI (and people think ASI is complicated). Most parsers are going to have some sort of token state counter for nested parentheses (this is a fun implementation detail of different parsers, because while recursion is easy enough in "context-free grammars", the details of tracking that recursion are generally not technically "context-free" at that point, so sometimes it is in the tokenizer, sometimes it is a context extension to the parser itself, sometimes it is using a stack implementation detail of the parser), and you are going to want to ignore to the next "," token that signals a new argument or the next ")" that signals the end of arguments, with respect to any () nesting.
2A. Because of how complicated expression parsing can get, that probably sets some resiliency bounds on your "ignorable grammar": it may require that internally it still follows most of the logic of your general expression language: balanced nested parentheses, no dangling commas, usual comment syntax, etc.
2B. You probably want to define those sorts of boundaries anyway. The easiest way is to say that ignorable extensions such as `as` must themselves parse as if it was a valid expression, even if the language cannot interpret its meaning. You can think of this as the meta-grammar where one option for an expression might be `<expression> ::= <expression> 'as' <expression>`, with the second expression being parseable but ignorable after parsing to the language runtime and JIT. You can see that effectively in the syntax description for Python's original PEP 3107 syntax-only type hints standard [1]; it's surprisingly succinct there. (The possible proposed grammar in the Type Annotations proposal to TC39 is a lot more specific and a lot less succinct [2], for a number of reasons.)
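A minimal sketch of point 2 in code, assuming the tokenizer has already stripped comments (as a sibling comment notes usually happens) and that we only need to balance brackets; all names here are hypothetical:

// Skip an ignorable `as` clause inside an argument list: consume tokens until
// a `,` or `)` at the current nesting depth, balancing (), [] and {} on the way.
function skipIgnorableClause(tokens: string[], start: number): number {
  let depth = 0;
  for (let i = start; i < tokens.length; i++) {
    const t = tokens[i];
    if (t === "(" || t === "[" || t === "{") {
      depth++;
    } else if (t === ")" || t === "]" || t === "}") {
      if (depth === 0) return i; // closes the enclosing argument list
      depth--;
    } else if (t === "," && depth === 0) {
      return i; // next argument starts here
    }
  }
  return tokens.length;
}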
CSS syntax has specific rules for how to handle unexpected tokens. E.g. if an unexpected character is encountered in a declaration, the parser ignores characters until the next ; or }. But CSS does not have arbitrary nesting, so this makes it easier.
Comments like the ones in your example are typically stripped in the tokenization stage, so they would not affect parsing. The TypeScript type syntax has its own grammar, but it uses the same lexical syntax as regular JavaScript.
A “meta grammar” for type expressions could say skip until next comma or semicolon, and it could recognize parentheses and brackets as nesting and fully skip such blocks also.
The problem with the ‘satisfies’ keyword is that a parser without support would not even know this is part of the type language. New ‘skippable’ syntax would have to be introduced as ‘as satisfies’ or similar, triggering the type-syntax parsing mode.
I understand that you can define a restricted grammar that will stay parseable, as the embedded language would have to adapt to those rules. But that doesn't solve the question, as Typescript already has existing rules which overlap with JS syntax. The GP comment was:
> For example, the `as` keyword for casts has existed for a long time, and type stripping could strip everything after the `as` keyword with a minimal grammar.
My question is: what would a grammar like this look like in this specific case?
It can’t strip what’s after the as keyword without an up-to-date TS grammar, because `as` is an expression. The parser needs to know how to parse type expressions in order to know when the RHS of the `as` expression ends.
Let’s say that typescript adds a new type operator “wobble T”. What does this desugar to?
x as wobble
T
Without knowing about the new wobble syntax this would be parsed as `x as wobble; T` and desugar to `x; T`
With the new wobble syntax it would be parsed as `x as (wobble T);` according to JS semicolon insertion rules because the expression wobble is incomplete, and desugar to `x`
The “as” expression is not valid JavaScript anyway, so the default rule for implicit semicolons does not apply. A grammar for type expressions could define if and how semicolons should be inserted.
TypeScript already has such type operators though. For example:
type T = keyof
{
a: null,
b: null
}
Here `T` is `"a" | "b"`; no automatic semicolon is inserted after `keyof`. While I don’t personally write code like this, I’m sure that someone does. It’s perfectly within the rules, after all.
While it’s true that TS doesn’t have to follow JS rules for semicolon insertion in type expressions, it always has done, and probably always should do.
This is just the default. Automatic semicolon insertion only happens in specific well-defined cases, for example after the “return” keyword or when an invalid expression can be made valid by semicolon insertion. Neither applies here.
You would also have to update your compiler. I guess you could phrase this as: you can't update your TS versions independently from your node.js version. But that's probably not an issue.
It’s an issue because node has a system of LTS releases, whereas TypeScript has quarterly updates, so the release cadence is different.
Updating node is much more fraught than updating TypeScript. For example, it may break any native code modules. That’s why users are directed to use the LTS and not the most recent release, so that there’s enough time for libraries to add support for the new version.
On the other hand, I usually adopt a new TypeScript version as soon as it comes out.
I made sure to decouple the transpiler from Node itself; the transpiler is in an npm package called amaro, which is bundled in Node. The goal is to allow users to upgrade amaro independently so we don't have to lock a TS version for the whole lifespan of a release.
TypeScript feels "boring" enough at this point that being a few years behind isn't gonna be an issue in most cases. For teams who want to stay on the absolute latest release of TypeScript but want to be more conservative with their Node version, external compilation will remain necessary; but for someone like me, where TypeScript has been "good enough" for many years that I'm not excited by new TypeScript releases, this feature will be really nice.
("Boring" in this context is a compliment, by the way)
EDIT: Though reading other comments, it seems like you can update the typescript stripper independent of node? That makes this moot anyway
But at the same time, it's good enough and has been good enough for many years. It's like how I'm sure ECMAScript 2024 contains cool new stuff, but if Node only supported ES6, I would have no trouble writing ES6.
tangential protip: if you're using nvm to manage node versions, take a look at fnm as a superior replacement. (It can read the same .nvmrc file to switch on cd into a given dir, but it's faster and "cleaner" wrt impact on your shell.)
With a couple exceptions (like enums), you can strip the types out of TypeScript and end up with valid JS. What you could do is stabilize the grammar, and release new versions of TypeScript using the same grammar. Maybe you need a flag to use LTS grammar in your tsconfig.json file.
Mistake isn't the right word. It's just a tradeoff.
There was no perfect solution available, so a tradeoff was necessary. You can disagree with this particular tradeoff, but had they gone another way some people would disagree with that as well. To be a mistake there would have had to have been an option available that was clearly better at the time.
Anyway, the idea that TS 5 should be backwards compatible with TS 1 is probably a bad one. Personally, I think packages with wide usage break backwards compatibility far too easily -- it puts everyone who uses it on an upgrade treadmill, so it should be done very judiciously. But even I wouldn't argue that TS 1 should have been its final form.
I’ll flip this around… reusing comparison as angle brackets is the mistake. C++ ran into some issues too.
I think Rust made the really smart move of putting :: before any type parameters for functions. Go made the good move of using square brackets for type parameters.
The problem can be traced back to ASCII/typewriters only including three sets of paired characters, plus inequality signs, which is not enough for programming languages.
We really need five sets: grouping, arrays/indexing, records, type parameters, and compound statements. Curly braces {} are also overloaded in JS for records and compound statements, leading to x => {} and x => ({}) meaning different things.
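The standard illustration of that overloading:

const f = (x: number) => {};   // {} here is an empty *block* body: returns undefined
const g = (x: number) => ({}); // ({}) is a parenthesized *object literal*: returns {}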
Square brackets wouldn't work for parametric functions because f[T](x) already means get the element at index T and call it.
Every time a standardization happens, part of human creativity gets suppressed. Before ASCII, people were inventing all sorts of symbols, and even the alphabet was flexible to change. After ASCII, we got stuck with a certain set of letters and symbols. Heck, even our keyboards haven't changed that much since then. I really think we need more symbols than just &@$#%^*}{][<>()/\_~|
That's an interesting topic. Do you happen to know if Unicode contains more "pair" characters? In LaTeX we call these delimiters, but that's just my limited experience coming in from the math side. I tend to agree that it would be helpful to have more kinds of nesting/pairing/grouping/delimiting characters. The problem is my imagination is limited to what I know from the ASCII world, and so it goes... no idea what new sets would look like.
So many different pairs are available. I like Asian corner brackets 「」and French guillemets « ». The angle brackets 〈〉 are popular in CS/math papers I think, though they might be confused with <>.
I have had access to « » before, which led me to doing a double take when I first encountered a << on a work station with font ligatures. It was funny.
Which brings us to the philosophical question about paired characters: If we were to pick paired characters from a key set not available on everyone's keyboard, why must the paired characters even be used in real languages? Is that not actually actively detrimental when we end up needing the characters for real? Is this not why we even have escape characters to begin with?
Plus, must they be a part of anyone's real keyboard to begin with? What makes 「」any more valid than ¿? Could we not have saved ourselves a lot of mental strain if we solved it earlier on with a full set of truly uncommon characters?
I can imagine an alternate history, where some programming language in the late 70's made their editors with simple shortcuts (such as Ctrl+A for "array block") to input barely-if-ever-used yet low code character such as † or ‡, which would never be used outside of a string. And nowadays, with modern IDE's, we wouldn't even see those characters half the time. It would be syntax sugar, with blocks types stated in gutters and data types represented with colors or text.
Five sets, but at any given place in the syntax, not all five are possible. (I would add function calls to the list—so, six.)
In most languages, (grouping and compound statements) cannot syntactically appear in the same place as (indexing, records, type parameters, function calls). So we are immediately down to four.
Rust takes the approach that you use :: before type parameters, so they are easily distinguished from comparison operators at a syntactic level.
Go takes the approach that [] is just fine for type parameters—which seems pretty reasonable to me. In Go, there’s nothing that can be both indexed *and* take a type parameter.
> In Go, there’s nothing that can be both indexed and take a type parameter.
True, but TypeScript has a rule against type-dependent emit - it’s not allowed. Code must always do the same thing regardless of what the types are. And in any case JavaScript does allow indexing on functions, since functions are just objects.
You could argue that it was a C++ mistake. It makes parsing harder, but otherwise seems to work as expected, so I don't consider it a mistake, but you could at least argue that way.
But regardless if it was a mistake in C++, it's now a complete standard, used in C++, Java, C#, and other languages to denote type parameters.
I would argue that it would have been a mistake to break that standard. What would you have used, and in what way would that have been enough better to compensate for the increased difficulty in understanding TypeScript generics for users of almost every other popular language?
It's definitely the right choice for Typescript. You could have gone the Scala route and used [] for generics, but that is so heavily used in ts/js as arrays it would not have made any sense.
I think the kind of teams that always stay on top of the latest TypeScript version and use the latest language features are also more likely to always stay on top of the latest Node versions. In my experience TypeScript upgrades actually more often need migrations/fixes for new errors than Node upgrades.
Teams that don't care about latest V8 and Node features and always stay on LTS probably also care less about the latest and greatest TypeScript features.
I work on Notion, a large app that's TypeScript on both client and server.
We find Typescript much easier to upgrade than Node. New Node versions change performance characteristics of the app at runtime, and sometimes regress complex features like async hooks or have memory leaks. We tend to have multi-week rollout plans for new Node versions with side-by-side deploys to check metrics.
Typescript on the other hand someone can upgrade in a single PR, and once you get the types to check, you’re done and you merge. We just got to the latest TS version last week.
It's already the case for ECMAScript and I don't see why TypeScript should be treated differently when Node.js has to transpile it to JavaScript and among other things ensure that there are no regressions that would break existing code.
Unlike Python typing it's not only type erasure: enums, namespaces, decorators, access modifiers, helper functions and so on need to be transformed into their JavaScript equivalent.
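Enums are the clearest case: unlike a pure annotation, they compile to real runtime code, roughly like this (simplified tsc output):

// TypeScript source:
//   enum Color { Red, Green }
// Roughly what tsc emits:
var Color;
(function (Color) {
  Color[Color["Red"] = 0] = "Red";
  Color[Color["Green"] = 1] = "Green";
})(Color || (Color = {}));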
To me beyond v4.4 or so, when it started being possible to create crazy recursive dependent types (the syntax was there since ~4.1 - it's just that the compiler complained), there weren't a lot of groundbreaking new features being added, so unless an external library requires a specific TS version to parse its type declarations, it doesn't change much.
If Node.js can run TypeScript files directly, then the TypeScript compiler won't need to strip types and convert to JavaScript - it could be used solely as a type checker. This would be similar to the situation in Python, where type checkers check types and leave them intact, and the Python interpreter just ignores them.
It's interesting, though, that this approach in Python has led to several (4?) different popular type checkers, which AFAIK all use the same type hint syntax but apply different semantics. However for JavaScript, TypeScript seems to have become the one-and-only popular type checker.
In Python, I've even heard of people writing types in source code but never checking them, essentially using type hints as a more convenient syntax for comments. Support for ignoring types in Node.js would make that approach possible in JavaScript as well.
Flow (by Facebook) used to be fairly significant in the JavaScript ecosystem several years ago, but right now it's somewhat clear that TypeScript has won rather handily.
Before that there was the closure compiler (Google) which had type annotations in comments.
The annotation syntax in comments was a little clunky, but overall that project was ahead of its time.
Now I believe even inside google that has been transpiled to typescript (or typescript is being transpiled to closure, I can't remember which - the point is that the typescript interface is what people are using for new code).
Closure was also interesting because it integrated type checking and minification, which made minification significantly more useful.
With normal JavaScript and TypeScript, you can't minify property names, so `foo.bar.doSomethingVeryComplicated()` can only be turned into `a.bar.doSomethingVeryComplicated()`, not `a.b.c()`, like with Closure. This is because objects can be indexed by strings. Something like `foo.bar[prop]()` is perfectly valid JS, where the value of `prop` might come from the user.
A minifier can't guarantee that such expressions won't be used, so it cannot optimize property accesses. Because Closure was a type checker and a minifier at the same time, it could minify the properties declared as private, while leaving the public ones intact.
> it could minify the properties declared as private, while leaving the public ones intact.
I don't think it ever actually did this. It renamed all properties (you could use the index syntax to avoid this) and just used a global mapping to ensure that every source property name was consistently renamed (no matter what type it was on). I don't think type information was ever actually used in minification.
So if you had two independent types that had a `getName` function the compiler would always give them the same minified name even though in theory their names could be different because they were fully independent types. The mapping was always bijective. This is suboptimal because short names like `a` could only be used for a single source name, leading to higher entropy names overall. Additionally names from the JS runtime were globally excluded from renaming. So any `.length` property would never be renamed in case it was `[].length`.
I'm asking because there's no accepted definition of what an unsound type system is.
What I often see is that the word unsound is used to mean that a type system can accept types different to what has been declared, and in that case there's nothing unsound about ts since it won't allow you to do so.
Could you explain how this isn't the type system accepting types "different to what has been declared"? Kinda looks like TypeScript is happy to type check this, despite `s` being a `number` at runtime.
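(The snippet being discussed wasn't quoted above; reconstructed here from the names mentioned in the replies, it's the classic array-covariance example:)

function messUpTheArray(arr: Array<string | number>): void {
  arr.push(42); // fine for an Array<string | number>
}

const strings: Array<string> = ["hello", "world"];
messUpTheArray(strings);      // accepted: Array<string> is treated as assignable
const s: string = strings[2]; // declared string, actually the number 42
console.log(s.toLowerCase()); // type checks, throws a TypeError at runtime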
That's a good example, albeit quite a far-fetched one.
In Haskell land, where the type system is considered sound you have `head` functions of type `List a -> a` that are unsound too, because the list might be empty.
That option also exists, you can just leave out the `messUpTheArray` lines and you get an error about how `undefined` also doesn't have a `.toLowerCase()` method.
However this problem as stated is slightly different and has to do with a failure of OOP/subtyping to actually intermingle with our expectations of covariance.
So to just use classic "animal metaphor" OOP, if you have an Animal class with Dog and Cat subclasses, and you create an IORef<Cat>, a cell that can contain a cat, you would like to provide that to an IORef<Animal> function because you want to think of the type as covariant: Cat is a subtype of Animal, F<Cat> should be a subtype of F<Animal>. The problem is that this function now has the blessing of the type system to store a Dog in the cell, which can be observed by the parts that still consider this an IORef<Cat>.
Put slightly differently, in OOP, the methods of IORef<Cat> all accept an implicit IORef<Cat> called `this`, if those methods are part of what define an IORef<x> then an IORef<x> is necessarily invariant, not covariant, in <x>. And then you can't assume subtyping. So to be sound a subtype system would presumably have to actually mark contra/covariance around everything, and TypeScript very intentionally documents that they don't do this and are just trying to make a "best effort" pass because JavaScript has 0 types, and crappy types are better than no types, and we can't wait for perfect types to replace the crappy types.
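A sketch of that cell example in TypeScript terms (all types invented for illustration):

class Animal {}
class Cat extends Animal { meow() { console.log("meow"); } }
class Dog extends Animal { bark() { console.log("woof"); } }

interface IORef<T> { value: T; }

const catRef: IORef<Cat> = { value: new Cat() };
const animalRef: IORef<Animal> = catRef; // allowed: the mutable cell is treated as covariant
animalRef.value = new Dog();             // legal through the IORef<Animal> view
catRef.value.meow();                     // type checks, but throws at runtime: a Dog has no meow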
> In Haskell land, where the type system is considered sound you have `head` functions of type `List a -> a` that are unsound too, because the list might be empty.
Haskell's `head` is not an example of the type system being unsound (I stress this point because we've been talking about type system soundness, not something-else-soundness).
From the view of the type system, `head` is perfectly sound: if the list is empty, the resulting value is ⊥ ("bottom"). And ⊥ is an inhabitant of every type. Therefore, `head` returning ⊥ when given an empty list is perfectly fine. When you force ⊥ (i.e. use it any way whatsoever), an exception is thrown. See https://wiki.haskell.org/Bottom
This is very much not the same thing (or remotely analogous) to what we have in my TypeScript example. There, the code fails at runtime when I attempt to call `toLowerCase`, yes; what's worse is the slightly different scenario where we succeed in calling something we shouldn't:
class Person {
name: string;
constructor(name: string) {
this.name = name;
}
kill() {
console.log("Killing: " + this.name);
}
}
class Murderer extends Person { }
class Innocent extends Person { }
function populatePeopleFromDatabase(people: Array<Innocent | Murderer>): void {
// imagine this came from a real SQL query
people.push(new Innocent("Bob"));
}
function populateMurderersFromDatabase(people: Array<Murderer>): void {
// TODO(Aleck): come back and replace this with a query that only selects murderers.
// i wanted to get the rest of the code in place, and this type checks,
// so I'll punt on this for now and come back later when I wrap my head
// around the proper SQL.
// we're not actually using this anywhere just yet, so no biggie ¯\_(ツ)_/¯
populatePeopleFromDatabase(people);
}
// ... some time later, Bob comes along and implements the murderer execution logic:
const murderers: Array<Murderer> = [];
populateMurderersFromDatabase(murderers);
// Bob is about to have a really shitty day:
murderers.forEach((murderer) => murderer.kill());
It is not possible to write an analogous example in Haskell using `head`.
That’s not correct; there are several ways the actual type of a value differs from what TypeScript thinks it is. But soundness isn’t a goal of TypeScript.
It's maybe useful to note in this discussion for some that "soundness" of a type system is a bit of technical/theoretical jargon that in some cases has specific mathematical definitions, and so "unsound" often sounds harsher (connotatively) than it means. The vast majority of type systems are "unsound" for very pragmatic reasons. Developers don't often care to work in a "sound" type system. Some of the "most sound" type systems we've collectively managed to build are in things like theorem provers and type assertion systems that some of us don't always even consider useful for "real" software development.
Typescript is a bit more unsound than most because of the escape hatch `any` and because of the (intentional) disconnect between compiler and runtime environment. Even though "unsound" sounds like a bad thing to be, it's a big part of why Typescript is so successful.
There's nothing arcane or particularly theoretical about soundness. It means that if you have an expression of some type, and at runtime the expression evaluates to a value, the value will always be of that type.
For example if you have a Java expression of type MyClass, and it gets evaluated, then it must either throw (so that it doesn't produce any value) or produce a value of type MyClass: either an instance of MyClass, or of one of its subclasses, or null. It will never produce an instance of some other class, or an int, or anything else that isn't a valid value for the type MyClass.
In addition to helping human readers reason about the code, a sound type system is a big deal for a compiler: it makes it possible to compile the code AOT to fast native code, without inserting a bunch of runtime checks and dynamic dispatching to handle the fact that inevitably some of the types (but you don't know which) are wrong.
The compiler implications are what motivated the Dart language's developers to migrate from an unsound to a sound type system a few years ago:
https://dart.dev/language/type-system#the-benefits-of-soundn...
so that they could compile Flutter apps AOT. This didn't require anyone to make their code resemble what you'd do in a theorem prover — it just means that, for example, all casts are checked, so that they throw if the value doesn't turn out to have the type the cast wants to return.
TypeScript is unsound because when you have an expression with a type, that tells you nothing at all for sure about what the value of the expression can be — it might be of that type, or it might be anything else. It's still valuable because you can maintain a codebase where the types are mostly accurate, and that's enough to help a lot in reading and maintaining the code.
The key factor is that TypeScript is not a language; it is a notation system for a completely independent language.
The purpose of TypeScript is to usefully type as much JavaScript as possible; to do both this and have a sound type system would require changing JavaScript.
Definitely to get the most ergonomic programming experience, while also having a sound type system, you'd need to change some of the semantics of the language.
A prime example is that if you index into an array of type `T[]`, JS semantics mean the value you get back could be undefined as well as a `T`. So to describe existing JS semantics in a sound type system, the type would have to be `T | undefined`, which would be a big pain. Alternatively you could make the type `T` and have that be sound, but only if you make the runtime semantics be that an out-of-bounds access throws instead of returning undefined.
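TypeScript exposes exactly this trade-off behind the `noUncheckedIndexedAccess` flag (mentioned elsewhere in this thread):

const xs: string[] = [];
const x = xs[0]; // default: typed string, even though it's undefined at runtime
x.toLowerCase(); // type checks by default; throws a TypeError when run
// With --noUncheckedIndexedAccess, xs[0] is typed string | undefined instead,
// and the call above is a compile error until you narrow it.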
That's true but misleading: if "any" and "unknown" were the only types, then "any" would be indistinguishable from "unknown" and you'd really have just the one type. Which makes the type system sound because it doesn't say anything.
If your type system has at least two types that aren't the same as each other, then adding "any" makes it unsound right there. The essence of "any" is that it lets you take a value of one type and pretend it's of any other type. Which is to say that "any" is basically the purified form of unsoundness.
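In code, the whole laundering operation is a single cast:

const n: number = 42;
const s: string = n as any; // `any` bridges any two types, no questions asked
s.toLowerCase();            // type checks; throws a TypeError at runtime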
> there's no accepted definition of what an unsound type system is
Huh?
The cheeky answer would be that the definition here is the one the TypeScript documentation itself uses[1].
The useful answer is that there’s only one general definition that I’ve ever encountered: a type system is sound if no well-typed program encounters type errors during its execution. Importantly, that’s not a statement about the (static) type system in isolation: it’s tied to the language’s dynamic semantics.
The tricky part, of course, is defining “type error”. In theoretical contexts, it’s common to just not define any evaluation rules at all for outwardly ill-typed things (negating a list, say), thus the common phrasing that no well-typed program must get stuck (unable to evaluate further). In practical statically-typed languages, there are on occasion cases that are defined not to be type errors essentially by fiat, such as null pointer accesses in Java, or escape hatches, such as unsafeCoerce in practical implementations of Haskell.
Of course, ECMAScript just defines behaviour for everything (except violating invariants in proxy handlers, in which case, lol, good luck), so arguably every static type system for it is sound, even one that allows var foo: string = 42. Obviously that’s not a helpful point of view. I think it’s reasonable to say that whatever we count as erroneous situations must at the very least include all occurrences of ReferenceError and TypeError.
TypeScript prevents most of them, which is good enough for its linting use case, when the worst possible result is that a buggy program crashes. It would definitely not be good enough for Closure Compiler’s minification use case, when the worst possible result is that a correct program gets silently miscompiled (misminified?).
Yeah, Flow had the ambition to be sound but has never accomplished it.
If you read the Flow codebase and its Git history, you can see that it's not for lack of trying, either — every couple of years there's an ambitious new engineer with a new plan for how to make it happen. But it's a real tough migration problem — it only works if they can provide a credible, appealing migration path to the other engineers across Facebook/Meta's giant JS codebase. Starting from a language like JS with all the dynamic tricks people use there, that's a tough job.
(And naturally it'd be even harder if they were trying to get any wider community to migrate, outside their own employer.)
Flow doesn't even check that array access is in-bounds, in contrast to TypeScript with noUncheckedIndexedAccess on. They're clearly equally willing to make a few trade-offs for developer convenience (a position I entirely agree with, FWIW).
Neat example, thanks! I hadn't known TS had that option. Array access was actually exactly the example that came to mind for me in a related discussion:
https://news.ycombinator.com/item?id=41076755
I wonder how widely used that option is. As I said in that other comment, it feels to me like the sort of thing that would produce errors all over the place, and would therefore be a real pain to migrate to. (It'd be just fine if the language semantics were that out-of-bounds array access throws, but that's not the semantics JS has.) I don't have a real empirical sense of that, though.
of course this created an interoperability nightmare with third party libraries, which irrevocably forked Google's whole JS ecosystem from the community's 20 years ago and turned their codebases into a miserable backwater.
Terser has a mangle-props feature. Mangling props by exact name or pattern has worked for me, though it can affect library objects or browser built-ins that I definitely don't want it to, and I've not gotten the "mangle all props of this object" version of it to work.
Google’s Closure Library is fascinating too. It’s being retired, but if you want to build a rich text interface for email authoring that truly feels like Gmail, warts and all, you can just use a pre-compiled version of the library and follow https://github.com/google/closure-library/blob/master/closur... within a more modern codebase!
Closure is almost a forgotten child of Google now. It does not even fully support ES2022 as of today. We are working hard to get rid of it completely. Surprise: lots of important projects still rely on it today.
Oh, Closure Compiler is such a throwback. I still remember staring at the project page on Google Code. Isn't it like two decades old or even older by this point? Is it still alive?
I happen to know this because we have some old projects that depend on this and are working hard to get rid of the dependency.
I wish Google would either update it or just mark the whole thing deprecated; the world has already moved on anyway. Relating this to Google's recent cost cutting, and seeing some of Google's other open source projects more or less getting abandoned, I have to say that today's Google is definitely not the same company from two decades ago.
There was no real competition, Flow was a practical internal tool with 0 marketing budget. Typescript is typical MS 3E strategy with a huge budget. Needless to say, Flow is much more practical and less intrusive, but marketing budget captured all the newbie devs.
Have to disagree. I tried Flow irrespective of marketing and didn’t think it was polished. Kept running into type situations that the language didn’t support well. Kept bugging out in my IDE. Had no elegance.
When I last used it, as a type system it was much better than TypeScript. A lot of flow features now exist in TypeScript though too.
One big annoyance with Flow I had is like you said: unpolished tooling. Another was frequent breaking changes (I don't hold it against them too much, it was 0.x software after all)
Also, because features diverged, you had to maintain type defs for multiple versions of Flow for multiple library versions. And then at one point, they decided to convert internal errors to `any` types instead of displaying the error. That was the last straw for me, especially since I maintained a few Flow type defs. I spent _so_ much of _my_ time just on type def maintenance for open source libraries already; with the `any` decay it was like flying blind too. So I just switched to TS with its inferior type system: it was good enough, and others maintained library typedefs for me. But now the type systems are much more closely aligned (unless Flow drifted), so switching to TS paid off in the end.
TypeScript was really really easy to get started with back in the day. It allows for incremental correctness, has good docs, and good tooling. On top of that a lot of beginner React tutorials started out with TypeScript, which onboarded a lot of new engineers to the TS ecosystem, and got them used to the niceties of TS (e.g. import syntax).
I don’t know what axe you have to grind, but TypeScript is firmly in the hands of the community now. There’s not much Microsoft could do to change that. In what way would it be rent-seeking?
Flow tries to be sound and that makes it infinitely better than TS where the creators openly threw the idea of soundness out the window from the very beginning.
This is a point in Flow's favour. However! Seven years ago or so, when TypeScript was quite young and seemed inferior to Flow in almost all respects, I chose Flow for a large project. Since then, I spent inordinate amounts of time updating our code for the latest breaking Flow version, until one came along that would have taken too long to update for, so we just stayed on that one. We migrated to TypeScript a little while back and the practical effect has been much more and effective type checking through more coverage and support. TypeScript may be unsound, but it works better over all. We turn on the vast majority of the safety features to mitigate the unsoundness. And it's developed by a team that are beholden to a large and vibrant user base, so any changes are generally well-managed. There's no contest, really.
I know, but it just doesn't matter enough. Believe me, I'm signed up to the idea of theoretical rigour, the argument for soundness is part of what originally won me over (along with previous good experiences with Flow on a smaller project, and the support for gradual adoption). I will continue to be drawn to languages and tools that have a strong theoretical foundation. But in this particular case, today, when comparing these particular projects in the large JavaScript codebase I am talking about, TypeScript still wins by some distance. I promise that it has caught way more errors and been more generally helpful in its language server abilities than Flow ever was. Maybe Flow has caught up since then in its core functionality, I haven't been keeping track, but there would still be the wide disparity in community support which has serious implications for developer education, availability of library type definitions, etc.
I think the real answer is adding actually sound types to JS itself.
One of the biggest revolutions in JS JITS was the inline cache (IC). It allows fast lookup and specialized functions (which would be too expensive otherwise). These in turn allow all the optimizations of the higher-tier JITs.
The biggest problem of Flow and TS is encouraging you to make slow code. The second you add a generic to your function, you are undoubtedly agreeing that it is going to be accepting more than 4 types. This means your function is megamorphic. No more IC. No more optimization (even worse, if it takes some time to hit those 5+ types, you get the dreaded deoptimization). In theory, they could detect that your function has 80 possible variations and create specialized, monomorphic functions for each one, but that's way too much code to send over the wire. That kind of specialization MUST be done in the JIT.
If you bake the types into the language via a `"use type"` directive, this give a LOT of potential. First, you can add an actually sound type system. Second, like `"use strict"` eliminated a lot of the really bad parts of JS, you can eliminate unwanted type coercion and prevent the really dynamic things that prevent optimization. Because the JIT can use these types, it can eliminate the need for IC altogether in typed functions. It can still detect the most-used type variants of a function and make specialized versions then use the types to directly link those call sites to the fast version for even more optimization.
I use TS because of its ubiquity, but I think there's the possibility for a future where a system a little more like Flow gets baked into the language.
> I use TS because of its ubiquity, but I think there's the possibility for a future where a system a little more like Flow gets baked into the language.
Have you looked into ReScript? It is basically a sound type system + JavaScript-like syntax. It inherits the type system from OCaml. You might like it. They recently released version 11.
Maybe! I see where you're coming from. That sounds like a long and painful road, still, though, from what is still a very dynamic language. Do you have a rough idea of how much more time/space-efficient a typical JavaScript program could be through this?
JS uses a JIT while Ocaml is AOT which is generally an advantage for Ocaml.
Ocaml only compiles once while JS compiles every time it runs. This means that JS is a lot more selective about its compilation, but the hot code could be every bit as fast as Ocaml. On the flip side, Ocaml is fast because it compiles method-at-a-time and a SML compiler like MLton which does a slow whole-program pass can generate significantly faster code (despite being a part-time hobby project for a few academics).
The big difference is money. Ocaml has some funding, but nothing compared to JS. It's hard to believe, but handling strings in JS is probably faster than what most devs could do themselves in C/C++. It's not because JS is inherently faster. It's because those bits are native and have had countless man-years poured into making them fast. That said, even the JIT itself is top-tier and raw integer code is only 20-50% slower than C (excluding any SIMD optimizations).
I think the upper limit for a typed JS could be about as fast and maybe a little faster than Ocaml on the JIT and maybe even a little faster with a more restrictive subset compiling to WASM.
Here's an example of TypeScript failing to be sound - it should give a type error but it doesn't. I believe Flow does indeed give a type error in this situation:
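(The example itself didn't survive the quoting; judging from the replies, it was along these lines:)

const xs: number[] = [1, 2, 3];
const n = xs[4]; // typed number; evaluates to undefined
n.toFixed(2);    // TypeScript accepts this; it throws a TypeError at runtime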
I think this is not a very good example. Not only does it also throw in TS, but it even throws in Haskell, which is pretty much the poster boy for sound type systems.
This isn't a type error unless your type system is also encoding lengths, but most type systems aren't going to do that and leave it to the runtime (I suspect the halting problem makes a general solution impossible).
Yes, it throws in TypeScript. TypeScript isn't the language chasing soundness at any cost.
This just illustrates the futility of chasing soundness.
Soundness is good as long as the type-checking benefit is worth the cost of the constraints in the language.
If the poster child for soundness isn't able to account for this very simple and common scenario, then nothing will actually be able to deliver full soundness.
It's just a question of how far down the spectrum you're willing to go. Pure js is too unsound for my taste. Haskell is too constrained for my taste.
You might come to a different conclusion, but for me, typescript is a good balance.
My balance point is Standard ML. SML and TS both have structural typing (I believe this is why people find TS to be more ergonomic). SML has an actually sound type system (I believe there is an unsoundness related to assigning a function to a ref, but I've never even seen someone attempt to do that), but allows mutation, isn't lazy, and allows side effects.
Put another way, SML is all the best parts of TS, but with more soundness, none of the worst parts of TS, and none of the many TS edge cases baked into the language because they keep squashing symptoms of unsoundness or adding weird JS edge cases that you shouldn't be doing anyway.
Personally, I think JavaScript kind of sucks, but it's approximately* the only choice for targeting browsers. If it wasn't for this fact, I probably never would have touched TS. SML sounds pretty good.
Wait, so Flow is not actually sound and their website is lying? Or do they have some "technically correct" definition of "sound" that takes stuff like that into account?
Flow is not sound. They have the ambition of trying to be sound (which I appreciate), but they've never accomplished it.
I went looking for where on their website they claim to be sound. There's definitely some misleading wording here:
https://flow.org/en/docs/lang/types-and-expressions/#toc-sou...
but if you read the whole section, it ends up also acknowledging that it's not entirely sound.
This is different. Neither Flow, TypeScript, nor JavaScript generates a runtime error for an out-of-bounds index. It's explicitly allowed by the language.
The result of an OOB access of an array is specified to be `undefined`. The throw only happens later, when the value is treated as the wrong type.
I don't consider a runtime error to be a failure of the type system for OOB array access. But in JavaScript, it's explicitly allowed by the specification. It's a failure of any type system that fails to account for this specified behavior in the language.
This is like arguing that a null exception is fine because it's allowed by the language. If you get `undefined` when you expect another type, most future interaction are guaranteed to have JS throw because of the JS equivalent of a null pointer exception. They are technically different because a dynamic language runtime can prevent a total crash, but the effect on your web app is going to be essentially the same.
[1,2,3][4].toFixed(2) // the indexing yields undefined, so this throws a TypeError
> It's a failure of any type system that fails to account for this specified behavior in the language.
Haskell has the ability to handle the error.
How do you recommend a compiler to detect out-of-bounds at compile time? It can certainly do this for our trivial example, but that example will also be immediately evident the first time you run the code too, so it's probably not worth the effort. What about the infinite number of more subtle variants?
> How do you recommend a compiler to detect out-of-bounds at compile time?
I wouldn't make the recommendation that they do at all. Full soundness is not my thing. But... if Flow wanted to do it, it would have to change the type of an index read on `Element[]` from `Element` to `Element | undefined`.
When a language's type system is sound, that means that if you have an expression with type "string", then when you run the program the expression's value will only ever be a string and never some other sort of value.
Or stated more abstractly: if an expression has type T, and at runtime the expression evaluates to a value v, then v has type T.
The language can still have runtime errors, like if you try to access an array out of bounds. The key is that such operations have to give an error — like by throwing, so that the expression doesn't evaluate to any value at all — rather than returning a value that doesn't fit the type.
Both TypeScript and Flow are unsound, because an expression with type "string" can always turn out to evaluate to null or a number or an object or anything else. Flow had the ambition to be sound, which is honorable but they never accomplished it. TypeScript announced up front that they didn't care about soundness:
https://www.typescriptlang.org/docs/handbook/type-compatibil...
Soundness is valuable because it makes it possible to look at the types and reason about the program using them. An unsound type-checker like TypeScript or Flow can still be very useful to human readers if most of the types in a codebase are accurate, but you always have to keep that asterisk in the back of your head.
One very concrete consequence of soundness that it makes it possible to compile the code to fast native code. That's what motivated Dart a few years ago to migrate from an unsound type system to a sound one:
https://dart.dev/language/type-system
so that it could AOT-compile Flutter apps for speed.
> In Python, I've even heard of people writing types in source code but never checking them, essentially using type hints as a more convenient syntax for comments.
Note that there's IDEs that'll use type hints to improve autocomplete and the like too, so even when not checking types it can make sense to add them in some places.
You can have this now by adding types with JSDoc and validating them with TypeScript without compiling; you get faster builds and code that works everywhere, without magic or the need to strip anything other than comments.
The biggest pain point of using JSDoc, at least for me, was the import syntax; this has changed since TypeScript 5.5, and it's now not an issue anymore.
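For anyone who hasn't tried this workflow, it looks roughly like the following (file and function names hypothetical); there's no build step, and you type-check with `tsc --allowJs --checkJs --noEmit`:

// greet.js: plain JavaScript, type-checked by TypeScript via JSDoc
/**
 * @param {string} name
 * @param {number} [times] optional repeat count
 * @returns {string}
 */
export function greet(name, times = 1) {
  return ("Hello, " + name + "! ").repeat(times);
}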
For god's sake, please stop shilling JSDoc as a TS replacement. It is not. If you encounter anything more complicated than `A extends B`, JSDoc is a pain in the ass of huge intensity to write and maintain.
I’ve had a lot of success combining JSDoc JS with .d.ts files. It’s kind of a Frankenstein philosophically (one half using TS and one half not) but the actual experience is great: still a very robust type system but no transpiling required.
In a world where ES modules are natively supported everywhere it’s a joy to have a project “just work” with zero build steps. It’s not worth it in a large project where you’re already using five other plugins in your build script anyway but for small projects it’s a breath of fresh air.
I do this as well. JSDoc is great for simple definitions, but as soon as you want to do something more complicated (generics, operators, access types, etc.) you get stuck. The .d.ts files are ignored at runtime because you're only importing them within JSDoc comments.
You should write complex types in interface files, where they belong, and there's full TypeScript support.
I use this approach professionally in teams with many developers, and it works better for us than native TS.
Honestly give it a try, I was skeptical at first.
In general JSDoc is just much more verbose and has more friction, even outside complex types. I recently finished a small (20 files/3,000 lines), strictly typed JS project using full JSDoc, and I really miss the experience of using the real TypeScript syntax. Pain points: annotating function parameter types (especially anonymous functions), intermediate variable types, and automatic type-only imports; these are the ones that I can remember. Yes, you can get 99% there with JSDoc and .d.ts files, but that's painful.
Source maps and build files are automatically generated when bundling, which you need to do with or without TypeScript… so this argument always confuses me. There is no tangible downside in my experience; either way it's just typing "pnpm build".
You can’t write complex TypeScript types in JSDoc, which is what GP said.
The moment you need to declare or extend a type you’re done, you have to do so in a separate .ts file. It would be possible to do so and import it in JSDoc, but as mentioned before it’s a huge PITA on top of the PITA that writing types can already be (e.g. function/callbacks/generics)
JSDoc does not scale, but some projects are just better when they aren't scaled.
JSDoc is indeed fine on a toy project, or in fact any project (even prod-ready ones) that doesn't warrant the trouble of adding NPM packages and transpilation steps.
Although they are rare, those types of small, feature-complete codebases do exist.
> If Node.js can run TypeScript files directly, then the TypeScript compiler won't need to strip types and convert to JavaScript
Node.js isn't the only JS runtime. You'll still have to compile TS to JS for browsers until all the browsers can run TS directly. Although some bundlers already do that by using a non-official compiler, like SWC (the one Node's trying out for this feature).
> In Python, I've even heard of people writing types in source code but never checking them, essentially using type hints as a more convenient syntax for comments.
It's not just comments. It's also, like the name "type hint" suggests, a hint for your IDE to display better autocomplete options.
> It's not just comments. It's also a hint for your IDE to display better autocomplete options.
Ah yes, autocomplete is another benefit of machine-readable type hints. OTOH there's an argument that another IDE feature, informational pop-ups, would be better if they paid more attention to comments and less to type hints:
Specifically, I saw JSDoc syntax and it triggered me so much that I closed the page and threw my phone away in disgust at the absurdity of even the idea that someone thought having something like this unironically is a remotely good idea.
What do you mean ugly? This basically is making Typescript official.
They just can't have browsers doing the actual type checking because there isn't a specification for how to do that, and writing one would be extremely complicated, and I'm not sure what the point would be anyway.
> In Python, I've even heard of people writing types in source code but never checking them
This is my main approach. Type hints are wonderful for keeping code legible/sane without going into full static type enforcement which can become cumbersome for rapid development.
You can configure TypeScript to make typing optional. With that option set, you can literally rename .js files to .ts and everything "compiles" and just works. Adding this feature to Node.js means you don't even have to set up tsc if you don't want to.
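A minimal sketch of such a loosened tsconfig.json (these are real compiler options; exactly which checks you relax is a judgment call):

```
{
  "compilerOptions": {
    "strict": false,        // don't enforce the strict family of checks
    "noImplicitAny": false  // unannotated code silently defaults to `any`
  }
}
```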
But if I were putting in type hints like this, I'd still definitely want them to be statically checked. It's better to have no types at all than wrong types.
> It's better to have no types at all than wrong types.
I agree - but the type systems of both Python and TypeScript are unsound, so all type hints can potentially be wrong. That's one reason why I still mostly use untyped Python - I don't think it's worth the effort of writing type annotations if they're just going to sit there and tell lies.
Or maybe the unsoundness is just a theoretical issue - are incorrect type hints much of a problem in practice?
Is this “unsound”-ness that you’re referring to because it uses structural typing and not nominal typing?
Fwiw I’ve been working with TypeScript for 8+ years now and I’m pretty sure wrong type hints has never been a problem. TS is a God-send for working with a codebase.
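For illustration, a minimal sketch of the kind of program being described (reconstructed; every line of it type-checks):

```
// Array<string> is (unsoundly) assignable to Array<string | number>
function append(values: Array<string | number>): void {
  values.push(42);
}

const strings: Array<string> = ["a", "b"];
append(strings);              // accepted by the compiler
const s: string = strings[2]; // typed as string, but actually holds 42
s.toLowerCase();              // runtime TypeError: s.toLowerCase is not a function
```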
`strings` is declared as an `Array<string>`, but TypeScript is happy to insert a `number` into it. This is a contradiction, and an example of unsoundness.
`s` is declared as `string`, but TypeScript is happy to assign a `number` to it. This is a contradiction, and an example of unsoundness.
This code eventually fails at runtime when we try to call `s.toLowerCase()`, as `number` has no such function.
What we're seeing here is that TypeScript will readily accept programs which violate its own rules. Any language that does this, whether nominally typed or structurally typed, is unsound.
There's not much connection. Typescript's record types aren't sound, but that's far from its only source of unsoundness, and sound structural typing is perfectly possible.
Soundness is also a highly theoretical issue that I've never once heard a professional TypeScript developer express concern about and have never once heard a single anecdote of it being an issue in real-world code that wasn't specifically designed to show the unsoundness. It usually only comes up among PL people (who I count myself among) who are extremely into the theory but not regularly coding in the language.
Do you have an anecdote (just one!) of a case where TypeScript's lack of type system soundness bit you on a real application? Or an anecdote you can link to from someone else?
> Do you have an anecdote (just one!) of a case where TypeScript's lack of type system soundness bit you on a real application?
Sure. The usual Java-style variance nonsense is probably the most common source, but I see you're not bothered by that, so the next worst thing is likely object spreading. Here's an anonymized version of something that cropped up in code review earlier this week:
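A sketch of this class of bug (names hypothetical, not the actual code):

```
interface Item { id: string; updatedAt: Date }

function withTimestamp(item: Item, patch: { id?: string }): Item {
  // Bug: the patch is spread *after* the new timestamp, so anything it
  // carries at runtime silently wins.
  return { ...item, updatedAt: new Date(), ...patch };
}

const sneaky = { id: "1", updatedAt: "yesterday" };
const patch: { id?: string } = sneaky; // fine: object types are lower bounds
const item = withTimestamp({ id: "1", updatedAt: new Date() }, patch);
item.updatedAt.getTime(); // runtime TypeError: updatedAt is a string here
```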
I mean... yes, there's a footgun there where you have to know to spread first and then add the new properties. That's just a good practice in the general case: an intermediate type that fully described the data wouldn't have saved you from overwriting it unless you actually looked closely at the type signature.
And yes, TypeScript types are "at least these properties" and not "exactly these properties". That is by design and is frankly one reason why I like TypeScript over Java/C#/Kotlin.
I'd be very interested to know what you'd do to change the type system here to catch this. Are you proposing that types be exact bounds rather than lower bounds on what an object contains?
> That's just a good practice in the general case: an intermediate type that fully described the data wouldn't have saved you from overwriting it unless you actually looked closely at the type signature.
The issue isn't that it got overridden, it's that it got overridden with a value of the wrong type. An intermediate type signature with `updatedAt` as a key will produce a type error regardless of the type of the corresponding value.
> I'd be very interested to know what you'd do to change the type system here to catch this.
Like the other commenter said, extensible records. Ideally extensible row types, with records, unions, heterogeneous lists, and so on as interpretations, but that seems very unlikely.
Look into "Row types" and how PureScript, Haskell, and Elm (to a limited extent) do it.
`{ foo :: Int | bar }` is a record with a known property `foo` and some unspecified additional properties `bar`. You cannot pass a `{ foo :: Int, bar :: Int }` into a function that expects `{ foo :: Int }`.
A function that accepts any record with a field `foo`, changes `foo`, and keeps the other properties intact has a type along the lines of `forall r. { foo :: Int | r } -> { foo :: Int | r }` (PureScript syntax).
Ah someone else posted a link and I understand the unsoundness now.
The only time an issue ever came up for me was in dealing with arrays
    let foo: number[] = [0, 1, 2]
    // typed as number but it’s really undefined
    let bar = foo[3]
But once you’re aware of the caveat it’s something you can deal with, and it certainly doesn’t negate the many massive benefits that TS confers over vanilla JS.
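For what it's worth, TypeScript does have an opt-in flag for exactly this caveat: `noUncheckedIndexedAccess` makes indexed reads come back as possibly `undefined`:

```
// with "noUncheckedIndexedAccess": true in tsconfig.json
let foo: number[] = [0, 1, 2]
let bar = foo[3] // now typed number | undefined, so you're forced to check
```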
Yeah, that example is unsound in the same way that Java's type system is unsound, it's a compromise nearly all languages make to avoid forcing you to add checks when you know what you're doing. That's not the kind of problem that people usually are referring to when they single out TypeScript.
I've been using TypeScript professionally for 6+ years and have only ever run into issues at the border between TypeScript and other systems (usually network, sometimes libraries that don't come with types). There are a few edge cases that I'm aware of, but they don't really come up in practice.
Or you can configure the TS compiler to allow JS imports. Then everything also compiles and works, but you can slowly convert your codebase from JS to TS file by file, and be sure that all TS files are properly typed and all JS files are untyped - instead of having everything as TS files where some are typed and some are not.
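A sketch of the kind of tsconfig.json that supports that migration (real compiler options; the combination is illustrative):

```
{
  "compilerOptions": {
    "allowJs": true,  // .js files may be imported while you migrate
    "checkJs": false, // but they are not type-checked
    "strict": true    // converted .ts files get the full checks
  }
}
```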
Yeah I start projects by explicitly typing `any` all over the place and gradually refining things, so every type that's specified is explicit and checked, I'm really enjoying that style.
Combine this with an eslint config that nudges you about explicit any, and the typescript compiler option to disallow implicit any, and you're well taken care of.
While it’s not common (from the source code I’ve reviewed over the years), some people make a new type with a name and use that in the definition:
```
from typing import NewType
# Create a new type for some_prime
SomePrime = NewType('SomePrime', int)
def process_prime(value: SomePrime) -> int:
    return value
```
However, this isn’t nearly as common as simply using a more descriptive argument name like `prime_number: int`.
One of the big advantages to type hinting in Python is that it feeds the IDE a lot of information to increase auto-complete functionality, so you want to avoid things like `p: "prime number"`.
> One of the big advantages to type hinting in Python is that it feeds the IDE a lot of information to increase auto-complete functionality
Yeah, until this discussion I thought the main benefit of type hints was earlier detection of bugs via static checking. Now though, I'm getting the impression that the bigger benefit is enabling IDE features such as autocomplete.
That helps me understand better why I haven't found type hints as useful as others seem to - I don't use an IDE. My use of Python is limited to small scripts which I write in a simple text editor.
Exactly. And if you use a library that does lots of metaprogramming (like Django), then it's impossible to get all the type errors to pass. Hopefully one day the type system will be powerful enough to write a Django project with passing type checks.
I don't find TypeScript to be burdensome when rapidly iterating. Depending on how you've configured your dev environment you can just ignore type errors and still run the code.
Incidentally, this is how the ecmascript proposal for introducing types to JS would work by default. The runtime would ignore the types when running code. If you want type checking, you’d have to reach for external tooling.
If this feature ever becomes the default (i.e. not behind a flag), how will the NPM ecosystem respond? Will contributors still bother to build CJS and ESM versions when publishing an NPM module, or just slap an 'engine: nodejs >= 25' on the package.json and stop bothering with the build step before pushing to NPM?
I personally would very much prefer if NPM modules that have their original code in TS and are currently transpiling would stop shipping dist/.cjs so I unambiguously know where to put my debugger/console.log statements. And it would probably be very tempting to NPM contributors to not have to bother with a build step anymore.
But won't this start a ripple effect through NPM where everyone very quickly starts to assume 'everyone accepts TS files'? It only takes one of your dependencies for this effect to ripple through. It seems to me that Node.js can't move this outside an opt-in experimental flag without the whole community implicitly expecting all consumers to accept TS files before you know it. And if they do, it will be just months before Firefox and Safari are forced to accept it too, so all JS engines will have to discard TS type annotations.
Which I would personally be happy with - we're building transpiling steps into NPM modules that convert the TS code into .js and .d.ts just to support some hypothetical JS user, even though we're using TS on the consuming side. But if Node accepts .ts files, we could just remove those transpiling steps without ever noticing... so what's stopping NPM publishers from dropping the .js/.d.ts files without noticing they broke anything?
The legendary Ryan Dahl is actually working on solving the exact problem you described by creating a new package registry called JSR.
Essentially, it allows you to upload your TypeScript code without a build step, so when other devs install it they can see the source code of the module in its original TypeScript instead of transpiled JavaScript.
That's really cool. One of the benefits of the JS ecosystem is the ability to step through code and crack open your dependencies. Not sure if this would directly make this possible when running your projects/tests, but it at least sounds like a step in that direction.
For the old libraries I maintain that are typescript and transpiled into .cjs and .mjs for npm, I'll probably just start shipping all three versions.
For a new thing I was writing from scratch, yeah, I might just ship typescript and not bother transpiling.
[edit: Apparently not. TS is only for top-level things, not libraries in node_modules according to the sibling comment from satanacchio who I believe is the author of the PR that added TS support and a member of the Node.js Technical Steering Committee]
Because I don't like breaking things unnecessarily. Some of my libraries are 10 years old and depended upon by similarly old projects that are not using ESM and probably never will.
Besides, it's already going through one transpilation step to go from TS to ESM, so adding a second one for CJS really isn't that much hassle.
I think if node.js had made require() work with ESM, I could probably drop CJS. But since that's probably never going to happen, I'm just going to continue shipping both versions for old projects and not worry about it.
> I think if node.js had made require() work with ESM, I could probably drop CJS
Why is making downstream have to switch to `await import()` that big of a deal?
You can use async/await in CJS just fine. Sure, sometimes you may need to resort to some ugly async IIFE wrappers because CJS doesn't support top-level await like ESM does, but is that really such a big deal?
Sure, it's a breaking change, but that's what semver major bumps are for.
I just think that if projects want to stay on CJS they should learn how to use async/await. I honestly don't understand why CJS libraries feel the need for synchronous require() for everything. (Though to be fair, I've also never intentionally written anything directly in CJS. I learned enough in the AMD days to avoid CJS like the plague.)
> Why is making downstream have to switch to `await import()` that big of a deal?
> You can use async/await in CJS just fine. Sure, sometimes you may need to resort to some ugly async IIFE wrappers because CJS doesn't support top-level await like ESM does, but is that really such a big deal?
It might seem like a small amount of work, but for a library one must multiply that small amount of work by the number of users who will have to repeat it. It can be quite a large amount in aggregate. And for what benefit? So I can drop one line from my CI config? It just seems like a huge waste of everyone's time.
Also, as a library user, I would (and occasionally do) get annoyed by seemingly unnecessary work foisted on me by a library author. It makes me consider whether or not I want to actually depend on that library, and sometimes the answer is no.
> multiply that small amount of work by the number of users who will have to repeat it
This is probably where we have the biggest difference in our calculations. I know there's a lot of pain in legacy CJS systems, but from my view (which is maybe more "browser-oriented", maybe a bit more Deno/Bun-influenced, and comes from a "TypeScript-first" mentality going way back to 0.x), that pain mostly lives in legacy "giant balls of mud" maintained by a sparse few developers. I don't see this multiplicand as very big on the scale of library user count. Most CJS for years and years has been transpiled from TypeScript or Rollup; most CJS only exists to be eaten by Webpack or another bundler, many of which today rewrite CJS to ESM anyway. From what I see, a lot of CJS is transpiled either out of habit (for some notion of supporting Node < 10 that doesn't make sense with current security support) or by accident (by a misconfigured tsconfig.json, for example, and then often looping back through a transpiler again to ESM). The way we cut through the Gordian knot of doing too much transpilation to/from CJS is to start eliminating automated transpilation to CJS in the first place. Which is why I find it useful every time to ask people what they are really trying to do when transpiling to .cjs today.
Of course, if your multiplicand is lines of code impacted - and I agree there are some great big piles of mud in CJS that are likely stuck that way for lack of developers/maintainers and lack of time/budget/money - then worrying about the minority of users still intentionally using CJS is worth caring about, and you have my sympathies in that situation.
You, of course, know your library's users better than me and maybe you do have a lot of CJS users that I just wouldn't consider in my calculations. I'm not going to stop you from transpiling to CJS if you find that necessary for your library. That's your judgment call. I just wanted to also make sure to ask the questions of "do you really need to?" and "how many users do you actually think it will impact?" out loud. Thanks a lot for the conversation on it, and I'm still going to be a radical banging the "CJS Must Die" drum, but I understand pragmatism and maintenance needs, especially those of legacy applications, and mostly just want to make sure the conversation is an active one and a lot less of passively transpiling stuff that doesn't really need it.
I'm a game developer. I make web games. We run our games with a simple nginx server that simply serves the wasm. We have some JavaScript libraries we use. They have to be raw dog .js.
I don't even know what your "esm" or "cjs" acronyms mean.
We use the discord JavaScript SDK. Discord only ships it as a node module or as .ts.
It's a pain in our ass to update because we don't know what those tools you're talking about are, and we don't want to know. Just give me the damn .js.
I'm on your side. No one should have to care about the difference between ESM (.mjs) and CJS (.cjs). CJS should just be dead and we only need one .js again. If you are following the Discord JS docs and using modern JS syntax `import` and `export` statements (if you are "raw dogging" it, have you heard the good word of <script type="module">?) then none of the above conversation applies to you, congratulations! That's the winning modern JS and you are one of the majority of users using it. The conversation above is about the old dead JS (CJS) and why people are still outputting the old dead JS today when it doesn't matter to people like you or I that just want plain modern .js files (and .ts files and simpler tsconfigs).
I wish it was that easy. I keep trying to default to ESM on Node projects, but the ecosystem is not there yet, at least in the context of server-side Node.js stuff.
It's one reason to consider a switch to Deno or Bun, or at least to maybe check JSR before NPM these days. The ecosystem for server-side ESM exists and is growing at a rapid pace, but it's sometimes hard to separate the actively maintained npm packages from the legacy ones in the server-side ecosystem. (The browser ecosystem is definitely more ESM-hungry.)
That said, with "type": "module" in my own package.json files, I've so far never had a problem importing legacy CJS npm packages into ESM, other than the types being more likely to be wrong (because packages that only publish CJS are more likely to also not publish their own types) or at least inaccurate for the current import approach (returning only a "synthetic default" instead of individual exports, for example). That's a bunch of papercuts - having to attempt multiple imports until you understand what shape Node is giving you for that CJS import - but after those papercuts I feel like interop is generally smooth sailing in today's Node.
Interesting. Every example that I've seen of React SSR is all natively in ESM, but I've mostly only glanced at the big ones (Astro/Next/Nuxt). I'm sure that there are a lot of paths that aren't as well paved given how diverse the React SSR space currently is (there are way too many competing options), and how simple you want to try to keep your SSR (a lot of those options are just so complex today, which presumably is why there are so many competing options, everyone has a different idea of how complex to make the whole thing).
(My own .tsx based view library doesn't yet have official SSR support, but I do all my testing in ESM in the built-in Node test runner `node --test` so I don't see any complications in doing .tsx on the "server-side", because that is how I'm testing everything already, I just haven't entirely figured out my "hydration" or "islands" or "stamps" approach so I don't officially support SSR yet. It's on the roadmap and I've made small bits of progress towards it, just need to solve it and haven't had the time/priority.)
I would love to ship my source code (.ts) to npm. But the TypeScript team was very much against this, as there would be tsconfig issues and other performance issues. Still, fingers crossed.
Eventually, node might allow JS to introspect those types.
That would be a huge win. Right now in Python, great tools like pydantic exist because Python can introspect said types, and generate checks out of them.
This means you can define simple types, and get:
- type checking
- run time data check
- api generation
- api document generation
Out of a single, standard notation.
Right now in JS, things like zod have to do:
const mySchema = z.string();
Which is basically reinventing what typescript is already doing.
That's not entirely true. `z.string()` in Zod offers more than just type safety akin to TypeScript. TypeScript provides compile-time type checking, while Zod adds runtime validation and parsing.
For those unfamiliar:
`z.string()` effectively converts `mySchema` into a functional schema capable of parsing and validation.
I've used it in places where you need runtime validation and in-process verification - it works pretty well for that, and you can extract the types from it via:
    const A = z.string();
    type A = z.infer<typeof A>; // string
Meaning if you define your types in zod first, and infer their types from that you get compile and runtime type checking.
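Putting those pieces together, a small sketch of that workflow (the shape of `User` is made up):

```
import { z } from "zod";

// One schema gives you both a runtime validator and a static type.
const User = z.object({ name: z.string(), age: z.number() });
type User = z.infer<typeof User>; // { name: string; age: number }

// Throws at runtime if the data doesn't match; typed as User if it does.
const user: User = User.parse(JSON.parse('{"name":"Ada","age":36}'));
```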
---
It's a bit of an overkill for nimble and fast codebases, though - but it works wonders for situations where in-process proofing needs to be done, and in all honesty it isn't that big of a task to do.
> Zod offers more than just type safety akin to TypeScript. TypeScript provides compile-time type checking, while Zod adds runtime validation and parsing.
Well of course it offers more, or you wouldn't be installing a library.
The problem is that even when you're expressing normal Typescript types, you have to use entirely different syntax. It's good that you can usually avoid double-definition, but it's still a big barrier that shouldn't be necessary.
A lot of the TypeScript team's focus these days is on alignment with the JavaScript language, and novel features such as runtime types have all but been dismissed, or at the very least pushed behind the TC39 JavaScript types proposal - much like using decorators on variables outside of class structures was.
Having said that, TypeScript allows plugins. These are very rarely used, as they augment the language by introducing features that are transformed into the resulting JavaScript files. One plugin that relates to your suggestion of runtime types is called Typia; it permits you to use your TypeScript type signatures at runtime with guards like `assert<MyType>(myValue)`, where it intercepts the function call to construct an exhaustive if statement in the transpiled JavaScript checking the shape of the passed value.
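A rough sketch of that Typia pattern, based on its documented `assert` API (the `MyType` shape is made up, and the transformer plugin has to be configured in the build):

```
import typia from "typia";

interface MyType {
  id: number;
  name: string;
}

// At compile time, typia's transformer replaces this call with an
// exhaustive, generated runtime check of MyType's structure.
const value: MyType = typia.assert<MyType>(JSON.parse('{"id":1,"name":"a"}'));
```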
So while I don't see it being a part of the language in the next four to six years, there are at least libraries out there already that allow you to do it today.
If JS ever adds type checking, I hope it doesn't choose Typescript.
We need a type system that is actually sound and TS is intentionally unsound. We need a type system that doesn't allow bad coding practices like TS does. We need a type system that enforces program design that allows programs to be fast. We need a Hindley Milner type system.
If you want a module to be typed, add a `"use type"`. This should disallow bad parts of the language like type coercion. It should disallow things that hurt performance like changing object shape/value type or making arrays of random collections of stuff. Incoming data from untyped modules would either coerce or throw errors if coercion can't be done at which point the compiler can deeply-optimize the typed code because it would have far stronger type guarantees and wouldn't have a risk of bailing out.
Having written OCaml in production for a few years, I think soundness comes at a cost in dev ergonomics, at least with the type systems of today's industry languages.
It blows my mind weekly how ergonomic and flexible TypeScript's type system is. It allows me to write great APIs for my teammates.
Is it possible for the type checker to end up in an infinite loop, or for a junior developer to abuse `as`? Absolutely, but it doesn't really matter in practice.
I wouldn't want to run TypeScript in rockets or submarines, though!
OCaml types are USED by the compiler to generate FAST code.
TS types are IGNORED by the JIT to generate SLOW code.
All the features that make Typescript more ergonomic for devs also allow it to generate slower JS code. AssemblyScript tries to be TS for WASM and it doesn't support huge swaths of TS because they output unusably slow garbage.
I also suspect that more than a few OCaml ergonomic issues are due to nominal typing. StandardML's structural typing and inference give an experience very similar to TS, but without the major soundness issues (though it must be noted that ANY generics in JS will be slow unless the compiler creates multiple function variants).
For my use case (web dev) TS is fast enough, but I do miss the OCaml compile times!
It's mainly the lack of ad-hoc polymorphism that makes OCaml feel a bit clunky to me at times. But structural typing sure would be nice.
I used to avoid TypeScript because of similar soundness issues, but in the context of web dev this weird type system that evolved from adding types to JavaScript turned out to be so nice to use. It's bonkers, because on paper it shouldn't be this nice, haha.
OCaml messed up with their operators. StandardML had a better approach, and I hope a future version adds module typeclasses (to help limit the type soup we see in Haskell).
As I wrote elsewhere in this thread, TS makes it incredibly easy to unintentionally make megamorphic functions that don’t have any inline cache and don’t get optimized at all. You think you’re writing efficient, DRY code, but it’s really just dog slow because you’ve neutered the JIT.
> OCaml messed up with their operators. ... I hope a future version adds module typeclasses
Do you mean modular implicits?[1] The original paper was published in 2014. There's an internship report from 2023 that summarizes the design and implementation issues:
TS allows you to pass a read-only object to a method taking a read-write value:
    type A = { value: number; }

    function test(a: A) { a.value = 3; }

    function main() {
      const a: Readonly<A> = { value: 1 };
      // a.value = 2; <= this errors out
      test(a);        // this doesn't error out
      console.log(a); // shows { value: 3 }
    }
If there's a bad way to write JS, TS has something available to make sure it's typed.
Does TS help you keep your functions monomorphic so they'll get optimized by the JIT? Nope.
Does TS keep your object shape from changing so it will get optimized by the JIT? It actively does the opposite, giving TONS of tools that allow you to add, remove, modify, and otherwise mess up your objects and guarantee your code will never optimize beyond the basic bytecode (making it one or two orders of magnitude slower than it could otherwise be).
TS doesn't do anything to prevent or even discourage these kinds of bad decisions. The "type soup" many projects fall into is another symptom of this. The big reason the types become such a mess is that the underlying design is a mess. Instead of telling programmers "fix your mess", TS just releases even more features so you can type the terrible code without fixing it.
> Does TS keep your object shape from changing so it will get optimized by the JIT? It actively does the opposite, giving TONS of tools that allow you to add, remove, modify, and otherwise mess up your objects and guarantee your code will never optimize beyond the basic bytecode (making it one or two orders of magnitude slower than it could otherwise be).
Can you elaborate or point to some of the tools? So I know what tools I may need to avoid
JS JITs use something called an inline cache (IC) to speed up the lookup of object shapes. JS JITs consider it a different shape if the keys are different (even if just one is added or removed), if the values of the same key are different types, or if the order of the keys changes.
If you have a monomorphic function (1 type), the IC is very fast. If you have a polymorphic function (2-4 types), the IC gets quite a bit slower. They call 5+ types megamorphic, at which point the JIT basically forgoes the IC altogether and also disables most optimizations.
TS knows how many variants exist for a specific function and even knows how many of those variants are used. It should warn you when your functions are megamorphic, but that would instantly kill 90% of their type features because those features are actively BAD in JS.
Let's illustrate this.
    interface Foo {
      bar: string | string[]
      baz?: number
      blah?: boolean
    }
Looks reasonably typical, but when we use it:
    function useFoo(foo: Foo) { .... }

    useFoo({bar: "abc", baz: 123, blah: true}) // monomorphic
    useFoo({bar: "abc", baz: 123})             // now a slower polymorphic
    useFoo({bar: "abc"})
    useFoo({bar: ["b"], baz: 123})
    useFoo({bar: ["b"], baz: 123, blah: true}) // we just fell off the performance cliff
As you can see, getting bad performance is shockingly easy and if these calls were across five different files, they look similar enough that you'd have a hard time realizing things were slow.
Union/intersection types aren't directly evil. Unions of a single type (e.g., a union of strings) are actually great, as they offer more specificity while not increasing function complexity. Even if they are a union of different primitive types, that is sometimes necessary, and the cost you are paying is visible (though most JS devs are oblivious to it).
Optionals are somewhat more evil because they somewhat hide the price you are paying.
[key:string] is potentially evil. If you are using it as a kind of `any`, then it is probably evil, but if you are using it to indicate a map of strings to a type, then it's perfectly fine.
keyof is great for narrowing the possible until you start passing those keys around the type system.
Template unions are also great for pumping out a giant string enum (though there is a definite people issue of making sure you're only allowing what you want to allow), but if they get passed around the type system for use, they are probably evil.
Interface merging is evil. It allows your interface to spread across multiple places making it hard to follow and even harder to decide if it will make your code slow.
Overloads are evil. They pretend you have two different functions, but then just union everything together.
Conditional types are evil. They only exist for creating even more complex types and those types are basically guaranteed to be both impossible to fully understand and allow very slow code.
Mapped types are evil. As with conditional types, they exist to make complex and incomprehensible types that allow slow code.
Generics are the mother of all that is evil in TS. When you use a generic, you are allowing basically anything to be inserted which means your type is instantly megamorphic. If a piece of code uses generics, you should simply assume it is as slow as possible.
As an aside, overloads were a missed opportunity. In theory, TS could speed everything up by dynamically generating all those different function variants at compile time. In practice, the widespread use of generic everything means your 5mb of code would instantly bloat into 5gb of code. Overloads would be a great syntax to specify that you care enough about the performance of that specific function that you want to make multiple versions and link to the right one at compile time. Libraries like React that make most of their user-facing functions megamorphic could probably see a decent performance boost from this in projects that used TS (they already try to do this manually by using the megamorphic function to dispatch to a bunch of monomorphic functions).
If only projects like Bun/Deno/Node had added runtime support for ReScript instead of TypeScript, collectively, as the web-tooling industry, we'd be in a better place. But you can't win against MS's marketing budget.
Also in hindsight, ReScript diverged away from OCaml, but the ReScript development team could have gone further by creating a runtime for ReScript. Then again I don't blame them - they are polishing the dev experience of ReScript and React.
This is the decade of writing shiny new runtimes - I hope somebody writes a ReScript runtime. Imagine ReScript, Core, rescript-webapi, a typechecker, re-analyze, plus a bundler, minifier, etc. baked into the runtime like Bun. Sounds like an interesting value proposition. Fingers crossed.
Does this mean Node can know if an exception is a subclass of ValueError, or whether an object is an instance of SomeClass? I'm a TS newb; I thought types outside of array, object, number, and string aren't present in JS, and that Zod and type-guard functions return plain objects with "trust me bro".
In JS, classes do retain runtime information: `instanceof` is a real runtime operator that works by checking the prototype chain of an object, so checking subclasses can be done at runtime.
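For example:

```
class ValueError extends Error {}
class NotFoundError extends ValueError {}

const err: unknown = new NotFoundError("missing");
console.log(err instanceof ValueError); // true, found via the prototype chain
console.log(err instanceof Error);      // true
```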
However, in TS other type information is erased at compile time. So if you write
type Foo = "a" | "b";
the runtime code will see that just as a plain string.
You are right, they aren't. In the JavaScript language, which is what actually gets executed, there are no TypeScript types.
The parent commenter was talking about a way for nodejs to provide, via an API, the content of type annotations on fields/functions/variables like in python.
However, in python the type annotations are a property of the object at run time, whereas they are completely stripped before execution for typescript.
So I'm not sure how it would work except by changing the typescript philosophy of "not changing runtime execution"
Bun’s DX is pretty unprecedented in this space, and most of my use cases are now covered / not causing Bun to crash (when actually using run-scripts with `bun run`).
Meanwhile, I can't configure Node to not require extensions on import, nor have tsc automatically add .js extensions to its compiled output, without adding a bundler… and although native TypeScript support would remedy this nit quite a bit, I can't imagine the user experience (or performance) will match Bun's when it reaches stable.
Extensions should be required. It's not possible to do path searches over the network like you can on local disk, and network-attached VMs, like browsers, are a very, very important runtime for JavaScript.
Fortunately, code is generally bundled for browsers to reduce the number of network requests and total size of downloads. And node has access to the filesystem, so it can do path searches just fine if it wants to support existing code.
You probably don't need a bundler in the browser anymore. We're not yet at the point where that is a popular "mainstream" opinion, but between massive improvements in browser connection handling (HTTP 1.1 connection sharing actually works in more places, HTTP/2+) and very good ESM support in browsers' well-optimized preloaders and caching engines (which can sometimes reduce download size much better than all-or-nothing bundles can; sure, the trade-off is more network requests, but we are in a good place to take that trade-off), we're at an exciting point where there is almost never a need to bundle in development environments, and it is increasingly an option not to bundle in production either. It is worth benchmarking today (I can't tell you what your profiler tools will tell you) whether you are really gaining as much from production bundles as you think you are. Not enough people are running those benchmarks, and some of them may be surprised.
The developer experience of unbundled ESM is great. Of course you do need to do things like always use file extensions, but those aren't hard changes to make; they're worth it for the better developer experience, and they can help us start to wean off of mega-bundler tools as required production compile-time steps.
Meh. Even with h3 I still see more gains from reducing network requests than most other attempts I try. (One day s3 will support multiple ranges per request, if I wish hard enough).
I like Bun a lot, but Deno is (still) the more mature, stable, capable (e.g. stable workers, HTTP/2) and, depending on the use case, more performant option (V8 > JSC). DX and tooling are top-notch. Deno can perform typechecking, btw; they bundle TSC, IIRC. Bun is the hype, but Deno is currently clearly the better option for serious endeavours. Still, the vision and execution of Bun is impressive. Good for us devs.
Bun started with compatibility with NodeJS as a primary goal, whereas for Deno it took a while to be able to import npm stuff. (Of course there are fun WTF[0] errors with Bun, and I only tried Deno before the npm import feature landed.)
Jokes aside, Zig is moving forward a lot, which is why it's not 1.0 yet, but that doesn't mean you can't write safe and performant applications right now.
Zig is also a rather simple and straightforward language (like C) and has powerful compile-time code generation (like C macros, but without the awful preprocessor).
I'm more worried about compilation or stdlib bugs. In theory you can do lots of things with lots of things, but in practice there are all sorts of hidden limitations and bugs that tend to be noticed once a software product is past 1.0 and has been out in the wild for half a decade or more.
You still get segmentation faults. My biggest complaint with Bun is not having enough safety.
If you use frameworks written for node memory usage is very high and performance is meh.
If you use frameworks written for bun they smoke anything on node.
I'd definitely move over, just to get rid of the whole TypeScript / cjs / esm crap, but:
1. frontend support is poor (next.js / solid.js - I can't run anything fully on bun)
2. I still need to rewrite my backend app from a node.js framework to a bun one
3. for backend development the javascript ecosystem is losing the crown: if I wanted something safe I'd just write it in Rust (TS lets any random developer write crap with `any` in it and it still validates); if I'm doing something AI-related I'd probably need Python anyway, and fastapi is not half bad
Given the context of Node here will allow experimental type-stripping and will not be doing things like import rewriting, Typescript's decision here to focus on "users write .js in imports because that's how the type-stripped file should look" seems like the right call to me. Less work for a type-stripper because Typescript can already check if there is a .ts or .d.ts file for you if you use .js imports everywhere.
I really enjoy typescript and have been yearning for a typescript runtime but I can't help but laugh that I left java all those years ago to finally seek something a lot closer to java.
I guess we all just wanted Java with JIT, a more feature-rich type system, and gradual typing. Also, for all the shortcomings of the npm ecosystem, it is a lot less daunting and more fun to be using libraries in this ecosystem.
And surprisingly even though rust is on a different end of the language spectrum but yet it offers a similar feel.
Edit: JIT was not the right terminology to use. I lazily wrote JIT. Apologies. What I meant to convey was the difference in startup times and run time between running something in the JVM and in V8. Java feels heavy, but the JavaScript ecosystem feels so nimble.
Java was literally the thing that made the term "JIT" popular, so I really don't know what you were going for here.
Also I just can't see how Typescript is in any way "closer" to Java - it's incredibly different IMHO. The only thing they have in common is probably the "Javascript" misnomer and the fact both support imperative programming, but that's it.
Typescript’s optional and unsound type system also does nothing for a JIT beyond what it could already do for JavaScript, you can’t do optimization if your types are unreliable. However, I really really like how Typescript’s type system super charges developer productivity (type errors via the compiler and feedback via the IDE), and don’t mind this part of the design at all.
Better for what? Quickly churning out short-lived code to get the next round of funding, definitely. Writing (and _supporting_) "serious" projects over the long term, which also require high performance and/or high scalability, and can rip through terabytes of data if needed, definitely not. (All IMHO from lots of personal experience.)
I'm actually in this exact position right now. The vast majority of the time I write in TS but I have a need to process a whole lot of data so I went for Rust instead. Java is too much of a headache for me, personally
Depends on your architecture. For scaling out rather than up, node and python are both far more performant because the footprint of minimum viable environment is much smaller. When you need to serve anywhere from 10-200,000 requests a minute on the same system quickly, and efficiently, lambda/azure functions/google app engine backed by node or python is pretty ideal.
As an example, when my org needs to contact folks about potential mass shooter events, our SLA is 90 seconds. If we did it in cloud with java or .net, it'd be too slow to spin up. If we did it on prem, we'd be charged insane amounts just for the ability to instantly respond to low frequency black swan events, or it'd be too slow. This is a real story of how a Java dev team transitioned to using node for scale in the first place.
Unlike Spring, JIT-based ASP.NET Core deployments spin up very fast (<2-5s for even large-ish applications, the main bottleneck is how fast it can open connections to dependencies, load configuration, etc.). For AOT variant, the startup time is usually below 200ms if we don't count the slowness of surrounding infra which applies to any language.
Of course, CPU and RAM per request are not even close when compared to Node.js, as Node is easily slower by a factor of 2-10.
Flexibility, for me, means something more like: I think I know what I want to do, but I also know that I'm probably wrong about that, so for now let's skip all the baroque protocol and let me make it work first. Once I'm sure I wrote what I actually wanted, I'll add types - if only to get rid of some bugs, consider edge cases, and earn nice code completions and auto-generated docs.
I'm very glad to use typescript over java, personally - the ergonomics are so much better! Especially if you stray away from the somewhat incomplete classes thing (type support for decorator arguments isn't great, for instance) and just focus on interfaces and functions.
One thing I miss that java has is runtime reflection of types though. Typescript's ecosystem has a million different ways to get around that and they're all a bit ugly imo.
It's not really syntax bloat; the linked docs mention how to define strict string types and elaborate on type-level programming, which is a very rare and powerful type-level capability.
As far as I understand, Virtual Threads aren't a type-oriented feature, which is basically the context for this thread.
I don't use them directly much, but template literal generics and conditional types are probably the closest a mainstream language has inched towards dependent types.
1. *Type Inference*: TypeScript can automatically infer types from context, reducing the need for explicit type declarations.
2. *Union and Intersection Types*: Allows combining multiple types, offering more flexibility in defining data structures.
3. *Literal Types*: TypeScript supports exact values as types (e.g., specific strings or numbers), which can be useful for more precise type-checking.
4. *Type Aliases*: You can create custom, reusable types, enhancing code clarity and maintainability.
5. *Interfaces and Structural Typing*: Interfaces allow for flexible contracts, and TypeScript uses structural typing, where the type compatibility is based on the shape of the data rather than explicit type declarations.
6. *Mapped and Conditional Types*: These allow for dynamic type creation and manipulation, making the type system more powerful and expressive.
7. *Optional Properties and Strict Null Checks*: These provide better handling of undefined and null values.
TL;DR: Typescript is unsound so it can add a lot more type-level features that would make a sound type system undecidable
Conceptually, no: almost every useful union type can be easily converted to a sum type. In my opinion the difference is in the ergonomics and in the implicit structural subtyping.
For example, a common union type is number|string, and the beautiful part is that to use a value of such a type you do not need to do any matching or mapping - you can just use the value, as it does not have a runtime wrapper. For example, (x: string|number) => JSON.stringify(x) works perfectly fine.
Also, you can have a function that takes as input an Array<string>|number|null and returns a string|number, without having to declare different constructors for the input number type and the output number type.
I believe that you could essentially implement this behaviour by generating enough typeclasses in Haskell, but regardless of feasibility it likely would not be a good idea.
An example of something in between union types and Hindley–Milner sum types is OCaml's polymorphic variant types https://ocaml.org/manual/5.2/types.html#sss:typexpr-polyvar which are (I believe) more advanced than TS unions but also a lot less ergonomic to use.
And TS has much more, e.g. intersection types: you could have a function with the type
    ((x: number) => string) & ((x: string) => number)
meaning that it is both a function that maps numbers to strings and one that maps strings to numbers (again, you can do this with typeclasses, but it is a worse experience).
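A quick sketch of that intersection in action (the `as` cast is needed because TS won't check the conditional body against each member of the intersection):

```
type Both = ((x: number) => string) & ((x: string) => number);

// One possible implementation of the intersection type above.
const f = ((x: any) => (typeof x === "number" ? String(x) : x.length)) as Both;

const s: string = f(1);   // resolves to the (number) => string member
const n: number = f("a"); // resolves to the (string) => number member
```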
TypeScript also has very good support for literal value types: for example, there is the string type, but also the "hello" type, which is the type of only the string "hello".
All in all if someone told me that they implemented typescript in haskell typeclasses I would not call bullshit on them, but I would not believe that anyone would actually use it for anything
You cannot write code where `theEntryMethod(null)` will fail to compile, unless you only use primitive types. (You can, of course, make that method fail at runtime, but that's not what's being talked about here.)
Looking at that it's just what a default POJO (with nullable properties) already is, so I'd see no need to represent that in Java.
Looks cool though and I like Typescript; my issue with it is that it needs transpiling to run. If it was a first-class citizen in an environment I would use it for my pet projects.
Yep - all non-primitive types in Java are `TheType | null` - TypeScript actually allows you to strip out the `| null`, which then means that sometimes you want to add it back in. So Java doesn't have a need for `Partial<T>`, it has a need for `NonNull<T>` and it can't express that at the type system level very easily right now (you can do it with type tagging and runtime checks inserted explicitly, but it's not very ergonomic right now)
I'm somewhere here as well. Personally, I think what I want is the stdlib (without the current legacy, all-but-deprecated bits) and ecosystem of C#, but with the ease and power of structural algebraic types. AoT is fine, with the option of a single binary. Ideally runtimeless, with clever trimming. If it also ran JITted in the browser, all the better.
I also want compiler/type checker niceties like exhaustive pattern matching.
Greater than 95% of the incompetence in JavaScript comes from two camps. The first of those are people who absolutely cannot program at all. The second of those are Java developers who were taught Java in school and it’s all they can do, so everything must look like Java.
The result of both tribes is pretending to do something they cannot do on their own. When you're a pretender, vanity becomes excessively important because everything is superficial, so you get layers of shit you don't need that they cannot live without. Any attempt to slice off the unnecessary bullshit always results in hyper-emotional distress, because people feel threatened when exposed. That right there is why I will never write JavaScript for employment ever again.
Java's type system was just very limited, gradual typing is a poor tradeoff most of the time. I used to think there were advantages to something like Python, but once I found Scala I never went back.
Yes, JIT was not the right terminology to use. I lazily wrote JIT. Apologies. What I meant to convey was the difference in startup times and run time between running something in the JVM and in V8. Java feels heavy, but the JavaScript ecosystem feels so nimble.
That's not what I am trying to convey here. The JVM is amazing, and it is a feat that Java is as fast as it is while JavaScript and V8 are an order of magnitude slower.
Also, even though I too found Java verbose, I kept believing we needed it to be so to write good software. I still enjoy Java, but it doesn't compare to the ergonomics of TypeScript for me. And the nimbleness of the experience plays a decent role, in my view.
Currently for me, either I really care about performance and I default to rust for those applications or I need solutions where the product will evolve quickly over time and I need great DX over performance and I default to typescript for those.
Java definitely has a role to play but its role in my work has certainly diminished.
You're saying it like it's an absolutely good thing. Some (many?) users would rather pay the cost upfront in compilation time (doesn't really matter if it's AOT or JIT) than pay the same cost many times over through a significantly slower runtime. JVM also scales up to supercomputers (and everything in between) if you want it to, so depending on your requirements a single-threaded alternative might not even be an option.
Gradual typing is the key. The problem with Java is that types are in your face way before you actually need them.
With TS you can prototype with JS, and only after you know what you are looking for can you start to add types - to find bugs and edge cases, and to get nice code completions for your stuff.
Gradual typing could still keep some static guarantees if the static part were sound - e.g., you couldn't assign a dynamically-typed integer to a string-typed variable without checking the type at runtime first - which TypeScript's isn't.
Elixir's new type system does much better here, as it determines whether a function actually guards for the right type at runtime ("strong arrows") and propagates the guarantees, or lack thereof, accordingly.
The fact that Java forced you to write types, and then made everything implicitly nullable so that you still get NullPointerExceptions at runtime after writing out all those types, was probably a big reason why dynamically-typed languages became popular.
The type system is a big part of what made Java cumbersome. It's loosened up a little over the years. TS itself may allow partial typing, but when team/company policies are involved, you'll often end up being forced to type everything.
My favorite deno feature is coming to node directly. Awesome!
Maybe this means I don't always have to install esbuild to strip types - I'm very excited about how this will make writing scripts in TypeScript that much easier. I've lately been preferring Python for one-off scripts, but personally I do think TypeScript > Python with respect to types. And larger scripts really benefit from types, especially when looking at them again after a few months.
Seconded again. While tsx usually just works, ts-node almost never just works. tsx is perhaps unfortunately named, though, so it may confuse people at first, since it has nothing to do with JSX syntax.
This seems like a very interesting approach to scripting. Does it basically provide an alias to child_process.exec as `$`, and besides that can I write things the same way I would in Node?
> Node.js standard library requires additional hassle before using
I read the hassle as having to set up the Node runtime in advance, but zx requires npm to install, so I'm not sure.
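For reference, a minimal zx script looks roughly like this (per zx's documented API; `$` is a template tag that spawns a child process):

```
import { $ } from "zx";

const branch = await $`git branch --show-current`;
console.log(`on ${branch.stdout.trim()}`);
```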
Deno has so many other great features. Most web standard APIs are available in Deno, for example. It can do URL imports. It has a built in linter, formatter, and test framework. Built in documentation generator. A much better built in web server.
Node is copying many of these features to varying degrees of success. But Deno is evolving, too.
You want to be forced to use a centralized registry? I don’t know. URL imports also enable fully isomorphic modules. I think you would enjoy the freedom of URL imports if the ergonomics were better. For example, it should just default to https:// so you don’t have to type that. Import maps also help a lot with this, definitely use them. But they could be even better by having first-class support for templating the module version into the URL so that the version can be stored separately, alongside the module name. Popular hosts with well-known URL structures could have their URLs automatically templated so you only have to specify the host and not the rest of the URL.
In other words, the tooling could be better, but the fundamentals of URL imports are sound, IMO.
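For example, an import map along these lines (host and version purely illustrative) lets source code keep a bare `import _ from "lodash"` while the URL and version live in one place:

```
{
  "imports": {
    "lodash": "https://esm.sh/lodash@4.17.21"
  }
}
```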
I disagree. It should not default to "https://" (I think defaulting to local files would be better).
Furthermore, I think that it should be made so that the "hashed:" scheme that I had invented (in the Scorpion protocol/file-format specification document, although this scheme can be used independently of that) can also be usable.
And I would also disagree with automatically templating URLs for popular hosts with well-known URL structures, although it might do to allow arbitrary expressions in place of the string literals and then add functions that abbreviate some of those URLs, if that would help (although I still think it is unnecessary).
Yes, I would prefer to use a centralized registry indeed. However, that's not actually what I'm talking about here. Even just decoupling the import from the package is enough. You can already do this by pointing a package in package.json to a remote tarball or git repo.
Kidding aside: You should really take an hour and check out the manual and std lib (https://jsr.io/@std). I was surprised how far Deno has come. A lot of pretty useful stuff you would otherwise need tons of NPM modules for.
It's been a really eventful month for Node. First they added node:sqlite in v22.5.0, and now TypeScript support is landing. I love the direction Node is heading in.
Bun came out swinging with strong Node.js compatibility promises. I have simply replaced node with bun for most of my own work without much effort. The only mental effort required is using `bun` instead of `node` on the command line for most trivial things.
Probably a bunch of assertion types and general DX. node:test is just a feature; Vitest is a whole product. The former might be enough for small packages, but it's nowhere near as useful for anything non-trivial.
Fair enough. We still use Jest at work for exactly those reasons. In my personal projects, I prefer to minimise dependencies rather than get every DX benefit I can.
For instance, exiting the runner on the first error. And the diffs with node:assert are not as nice compared to Vitest, either.
I'm building a framework with minimal dependencies at https://www.plainweb.dev so I'm super excited about everything that Node builds in (sqlite, typescript).
But there is a difference between supporting the bare minimum and actually making it nice to use day to day.
Thank you a lot. Great work. I know it’s still experimental, but over time it will have a big impact on developer experience and will simplify the development workflow for a lot of projects.
This is the roadmap: https://github.com/nodejs/loaders/issues/217.
We talked with the TypeScript team and we will give each other continuous feedback on the progression. We made sure to take some precautions in order to avoid breaking the ecosystem. I still think that in production JS is the way to go, so users should always transpile their TS files.
Could you expand on why transpiling is the right long-term strategy for production? I get that right now you don't support some TS-specific features like enums. Is that the concern? Those seem like a few legacy exceptions; new TS capabilities will be "just JavaScript".
Not transpiling would be great to reduce toolchain complexity and eliminate the need for sourcemaps just to understand exceptions and debug.
The first reason is that if we supported TS features that require transformation (such as enums), we would also need to support sourcemaps, so in the first iteration I decided not to, to avoid being overwhelmed. Right now we replace inline types with whitespace, so locations are preserved. We plan to add those features, probably behind a flag at the beginning. We need to move in small steps and think very carefully - every decision could have a huge impact on the ecosystem - so I decided to start with the smallest subset possible.
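Illustratively, that whitespace replacement means something like this (a sketch, not the exact output):

```
// input .ts
function add(a: number, b: number): number { return a + b; }

// after type stripping: types are blanked out, so line/column positions
// are preserved and no sourcemap is needed
function add(a        , b        )          { return a + b; }
```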
That all makes sense. It sounds like you haven't ruled out supporting TS directly in production, but it's complex, you have to move carefully, and you're not sure you'll get all the way there. Is that right?
A long time ago I started converted to using node js for backend work, seemed to offer many benefits over writing code in PHP without bringing many problems of Java. I found node to be somewhat clunky and a language where you had to bolt it together to get the language you wanted. Eventually started writing golang and it felt much easier to write, sometimes way more verbose but the type safety just made coding simpler.
Typescript seemed like a good option but was just another bolt on, I am not sure what value you gain by using Typescript over Golang, you have nice defined types which is great but it does not solve other issues with the language that are resolved in golang (also solved in deno).
One large benefit of using node over golang is the speed of prototyping something, which I think having to use Typescript largely negates, so I cannot really decide if this is a good step forwards or is making node lose some qualities that made it a good choice in other ways.
Typescript is safer JS. You're still using JS with TS. The "bolted on" phrasing makes me think your issue may be more the absence of more opinionated frameworks like Django that manage everything out of the box. I love using Django, but it's a little harder to go off the beaten path with it.
"Bolted on" is how I'd describe it too. Using TS means messing a lot more with random config files. And standard tools like the NodeJS profiler don't work with TS, which hopefully will change soon.
I've never used Django. Express seems a lot nicer.
My phrasing there was a direct comparison of developer experience between golang and nodejs. Golang has a very complete core library, and I try to avoid frameworks as much as possible. I rarely have to think about the language or ecosystem; everything I want or need is already part of the language. Testing and linting are some great examples.
I mean, the obvious answer is language familiarity.
If your project's frontend code is in JavaScript/TypeScript (which it is), then using node is an easy choice. Shared libraries, shared types, etc.
I was in that paradigm, and there was very little code reuse between frontend and backend. Sometimes, when performing validation, I would have liked that option, but I would not call it a killer feature that determines the language I use.
1. that's a lie, and "lots" of people don't use HTMX (unless I've been living under a rock and there is a not-insubstantial number of people using it :D)
2. HTMX IS javascript, and you can still use the same familiar packages across front end and backend e.g. lodash
I guess if you already know JavaScript, or have in-house experience, vs. learning Go. We use it with cdktf given previous frontend experience; it seemed logical vs. Go.
Sorry, I was going to add that there are probably more JavaScript developers in the job market, although there is a limit to the usefulness of these developers.
In the company I worked at we were fairly small and did not have huge applications running on node, so it made that journey easier
I beg of thee, do not do this. I get that people love TypeScript, but I am already running into a problem where JavaScript resources are written in TypeScript by default, with nothing for regular JavaScript. This is the same problem that happened when jQuery hit its peak popularity and an overwhelming number of resources and guides amounted to "Oh, just do this in jQuery".
> javascript resources are written in typescript by default with nothing for regular javascript. This is the same problem that happened when jQuery hit its peak popularity
That's definitely a potential issue - JavaScript is the fundamental standard, not jQuery and not TypeScript. Certainly there are situations where maximum forward-compatibility is important (learning resources are a good example), and for those, vanilla JavaScript is the best choice.
All of the old resources that relied on jQuery are now hopelessly outdated, whereas the contemporary ones that used plain JavaScript are as valid now as when they were written. I'm sure the same will be true of TypeScript vs JavaScript when the next big thing comes along.
Tell me you're out of touch without telling me that you're out of touch.
jQuery was so popular because writing any more than a few lines of vanilla JavaScript was an *awful* experience due to all the differences between browsers.
When things eventually standardized-ish and jQuery became unnecessary, other libraries/ecosystems popped up (e.g. React/JSX) to make writing webapps easier, because writing any more than a few lines of vanilla JavaScript was still an *awful* experience.
When webapps grew in size and scope, other "transpiled" languages popped up (e.g. TypeScript), because writing any more than a few lines of vanilla JavaScript is *still an awful* experience.
We're stuck with JavaScript due to past decisions, but let's not pretend it's actually a good tool. If it were we wouldn't need 50,000 tools/frameworks/transpiled languages to hide how terrible it is.
This reminds me of the io.js situation, where in the end the major fork changes were incorporated into Node. This is why I am comfortable staying with Node and npm for my projects: the features will eventually trickle down anyway.
This is the “enterprise” approach and it’s a solid one in my book. I do think drop-ins like Bun and PNPM are always great, however, and we’ve adopted both where it has made sense. I don’t think Bun will make sense very often, as it’s only when you really need the performance that the added maintenance becomes worth it, especially right now when it’s not exactly stable for a lot of things. PNPM, however, is often very good compared to NPM and doesn’t add much maintenance, as the tooling gives your developers an essentially similar experience.
I’m also not sure the features will eventually “trickle down”. I’m not sure NPM wants to adopt the advantages PNPM gives you, for example, and that’s probably a good thing too, considering the basis of NPM is a really solid system to build on top of, which it wouldn’t be if it were very opinionated. One of the big issues Node has today is that it was very opinionated with CommonJS, which made sense at the time but is a ginormous pain in the butt in the modern world. Though the blame obviously doesn’t lie with Node alone.
Very true; the rising popularity of deno and bun clearly indicates that new runtimes solve real issues people have. That's why I mentioned "my projects": your experience may vary.
On the topic of typescript - yes. However, Bun has a lot more tools baked in than Node does (bun test, for instance). Would be real nice to see Node start adopting more ideas from Bun and others.
I find it interesting that everyone looks at Bun and shames Node, saying it needs to catch up to Bun and implement stuff Bun has that Node doesn't, yet nobody says Bun should catch up with Node's feature parity. Node isn't trying to replace Bun; Bun is trying to replace Node, so it should be the one that needs to match parity.
Bun is doing that: every update usually has node compat fixes or improvements, and the list of supported modules has gone up significantly since it was released.
I'm honestly giddy. This could be the (slow) beginning of a new era, where "JS with types" is finally a native thing.
I'm even willing to forgive all the mess that CJS vs. ESM is if they manage to pull this off.
I hope this sees widespread adoption/usage, which might finally cause some movement to integrate TS into ECMAScript after all. Some dynamically-typed language fanatics (who are, in my opinion, completely detached from the reality that static types are what the vast majority of devs want) still have an iron grip on TC39; this might be the start of their end. And good riddance.
Yeah, my personal experience is that easily 95% of devs I work with/have met in person, if not closer to 99%, prefer statically typed languages. Maybe that’s a biased sample, but I do think the overall preference among devs is very strong. I also see JS slowly, more-or-less becoming TypeScript over time.
My days of being a real software developer are long behind me. So I'm totally willing to accept that I'm wrong here. But when I build a POC in particular, there's a LOT of power and flexibility granted by not giving a fuck about types. Suddenly I can accept non well defined data types (depending on my implementation) and can persist data that otherwise would have taken code changes and approval processes to accept. I do believe there is a place for types, but to type all the things is folly. There are capabilities within JavaScript to handle both.
It’s fine to have an escape hatch when you can’t figure out or don't care about the types (yes, sometimes you should use unknown, but that’s another topic).
But at least TS forces you (if the strict flag is enabled) to explicitly mark all those places; you can always revisit them later (see the sketch below).
Case in point: I’ve written A LOT of redux-saga code and figuring out types for that was exceptionally difficult for me. Sprinkled all that with a ton of anys. Had a few bugs but not anything serious.
Finally rewriting all that slop with async-await and am really happy about it
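Returning to the strict-flag point above, here is a minimal sketch of how `noImplicitAny` (part of `strict`) makes the escape hatches explicit and therefore greppable:

```ts
// With "strict": true (which includes noImplicitAny) in tsconfig.json,
// an unannotated escape hatch is rejected:
//
//   function load(id) { ... }
//   // error TS7006: Parameter 'id' implicitly has an 'any' type.
//
// The hatch has to be spelled out, which makes it easy to find later:
function load(id: any) {
  return fetch(`/items/${id}`);
}
```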
> when I build a POC in particular, there's a LOT of power and flexibility granted by not giving a fuck about types
I find the same thing, but only for very small throwaway scripts and the like. For anything beyond ~20 lines of code, I rapidly hit confusing cases like “is this parameter just a map, or a map of maps?” Then I add types and it makes sense again.
Just use `any` or `unknown` when prototyping, then apply types once your happy paths start working for the first time to start catching the unhappy ones.
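For instance, a minimal sketch of that workflow (hypothetical names, just for illustration):

```ts
// Prototype phase: don't care about the payload's shape yet.
function handle(payload: any) {
  console.log(payload.user.name); // fast to write, crashes if wrong
}

// Happy path works: pin the shape down and let the compiler find the rest.
interface Payload {
  user: { name: string };
}
function handleChecked(payload: Payload) {
  console.log(payload.user.name); // same code, now verified
}
```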
For sure, if I’m writing a short script, no more than ~200 LOC, Python (without type hints) is my favourite language. But for a sizeable codebase, worked on by multiple devs or even just me over time, I’ve got an extremely strong preference for static types.
Also, FWIW, TypeScript is the most “lightweight” statically typed language I’ve ever used, in terms of extra ceremony/lines of code over a dynamic language. Once you get used to its type system, and embrace the structural typing ideas, I feel the overhead is super minimal. It might slow me down by ~5% on a short script over JavaScript, while dramatically improving maintainability as a codebase grows.
This is not what the GP asked for. That's the most requested feature from the State of JS survey.
I bet the percentage of web developers who want this are a tiny minority. There are just too many issues with this.
Static typing without the benefits of better runtime performance, soundness, strong typing etc. is basically just documentation/comments with extra steps.
Also TS is a complex, moving target. Most devs don't want to learn new fancy features every couple of months, but prefer stability guarantees. Several notable projects have moved away from TS. Even Ryan Dahl admitted that integrating Deno with TS was probably a mistake.
Meanwhile you have WASM slowly and steadily getting crucial features on a sound foundation.
I'm extremely cautious about TS and wary of the hype surrounding it.
It's interesting how TypeScript beat Flow in terms of popularity, and yet everyone is now calling for TypeScript to be more like Flow (to focus on plain type checking instead of transpilation; just strip out the type annotations)... And most of those people don't even know about the existence of Flow.
The sad thing about the tech sector is that you can be right about something and yet still lose the hype-wagon popularity contest. Then your competitor copies your original idea... The exact same idea which they had previously claimed was inferior.
It seems that the idea was inferior purely on the basis that it wasn't their idea. As soon as they've appropriated the idea, suddenly it's the best idea in the world.
It's about time for TC39 and Microsoft to standardize TypeScript as part of JavaScript. Not "types as comments" either, but actually TypeScript, minus the non-standard runtime semantics and modulo whatever changes are necessary to integrate the grammar.
So many runtimes and tools are integrating TypeScript now, and with multiple implementations, that a real standard is necessary. It'll be much harder to evolve TypeScript because it'll have to stay backwards compatible, but it's grown to that point now, imo.
Maybe it's helpful to analyze TypeScript's track record in deprecating features and creating breaking changes, which could be a big red flag to TC39. I'm all for typing support but my work is mostly prototyping on short lived projects. People maintaining production systems that will eventually become legacy systems might have a different opinion.
I don't have a list and I don't even write TS (or do much web dev in general), but I do follow their announcements and it seems every one of them brings new big type things.
Nevertheless, I don't see TS support in e.g. browsers being particularly useful, as in practice all deployed JS code is already packaged somehow, so a stage that converts TS to JS (and also checks the types) fits right in. It's useful for hobbyists, but I don't think that is reason enough to come up with a standard.
That has no bearing on the comment you're replying to.
The point is that the types could be TypeScript, Flow, Hegel, or something else. The browser won't perform type checking, it will just ignore the types.
> This proposal aims to enable developers to add type annotations to their JavaScript code, allowing those annotations to be checked by a type checker that is external to JavaScript. At runtime, a JavaScript engine ignores them, treating the types as comments.
"types as comments" is the term that the champions and reviewers of this proposal have been using.
It refers to how the types are parsed: aside from some kind of standard start and end delimiters, the parser does not try to parse the expression-level type syntax. Type expressions are just strings of characters. This way you can have basically any syntax at all for types.
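A rough illustration of that parsing model, with hypothetical code (the delimiters are the subject of the proposal, not settled syntax):

```ts
// To the type checker, these annotations are real syntax.
// To a proposal-compliant engine, everything between ':' and the end
// of the annotation is an opaque character run, skipped like a comment:
function area(radius: number): number {
  return Math.PI * radius ** 2;
}
const points: Array<{ x: number; y: number }> = [];

// What the engine effectively executes:
//   function area(radius) { return Math.PI * radius ** 2; }
//   const points = [];
```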
The proposal would explicitly treat supported type annotation syntax as comments in the grammar. It is definitely types as comments, even if it is also type erasure.
And it would apply to Flow’s type annotation syntax which is also not presently treated as comments, at least for the very large subset of that syntax which overlaps with the proposal.
Yeah, that was from 2016 or so. But a TypeScript spec is different than folding TypeScript into the ECMAScript standard. Some parts of TypeScript would have to be dropped or changed for that to work.
That doesn't standardize the syntax, only types-as-comments. It would have to standardize some delimiters for type expressions, but that's it.
I do think that the semantics should be standardized too, otherwise you have non-interoperable types. The goal should be that you can use two libraries together without having to make sure they use the same type-checker.
TS is an intentionally unsound type system that tries to let you type your code no matter whether it runs like garbage, is unreadably complex, or uses the worst parts of the language.
What TC39 needs is a type system that limits what you can do to things that are sound, performant, and good practice. TS is the exact opposite of this.
TC39 standardizing TypeScript as part of JavaScript would be a critical error. TC39's job is to look at all the options and make something better. TypeScript won out in popularity due to its tight integration with VSCode; it's not necessarily the best route for the core language.
Another reason is probably performance. Executing TS would require a lot of extra CPU and even more energy than JS.
A lot of effort and money has been invested into JS engines. I wonder if making a TS native engine (which nobody has made yet) from scratch might make more sense than adapting JS engines to run TS.
This is a persistent meme that has no basis in reality. A TypeScript engine is a JavaScript engine, since everything that can be done in JS can be done in TS. It's plausible, maybe, that there could be some additional optimisations on TS code where the engine is sufficiently happy with all the types in a subset of the program. But that would be on top of all existing JS engine features, unless you want your engine's performance to suddenly degrade if you stray outside the fully-statically-verifiable-TS happy path.
That's a pretty obtuse interpretation of the comment. Browsers natively being able to run Typescript code / .ts files instead of requiring transpiling to plain Javascript would be a large boon to the TS ecosystem by making basically everything easier. Even if it's just stripping the TS and running the plain JS it would already be helpful, but it running the typechecking beforehand would be wonderful.
Maybe I was reading too much into the comment I replied to, but to me "a typescript engine" implied more than "ignoring the types" (which is the current TC39 proposal).
And I was replying based on what I've seen other people saying whenever the subject comes up; apologies if I misread.
Browsers doing type checking is a pretty fraught idea IMO, at least with Typescript and not some other statically typed language entirely.
Taking one link out of the toolchain (tsc) would already be a huge blessing.
And naive me hopes for a future where in my web-app I can set a policy that any non-ts, type-incompliant code is not allowed to run.
The amount of exceptions I get in the console from terrible garbage-code outside of my control but that I have to include because enterprise is staggering. Would love to have a meta-setting which would just kill them if they can't be arsed to even have a modicum of code-hygiene (sorry for the rant)
Why is taking out the part that actually checks the types at the developer's side a huge blessing?
Or if you are hoping to get the benefit of type checking in the browser itself (taking the same sweet time as tsc, but this time on every browser instead of once in the CI), then how long would you want to wait to be able to actually use the new typing functionality described in e.g. the latest TS annoucement? https://devblogs.microsoft.com/typescript/announcing-typescr... .
Because it would take a while until that would then become the standard and then become available in every browser. And you still need to provide the JS versions, because not every browser is going to support TS.
In the meanwhile you could just keep using tsc just as before and get access to new functionality immediately.
(I imagine you could run tsc in the browser right now if you really wanted to.)
> Why is taking out the part that actually checks the types at the developer's side a huge blessing?
Oh, no, certainly we want to keep type-checking in the pipeline, somewhere.
However, if the browser "understood" typescript, your codebase could have immediate hot-reload, without any transpilation in-between. The type-checking could then be (and already is when using something like esbuild/swc) an entirely separate process that happens independently.
Webpack's HMR is pretty good, but not having to modify the code at all to have it work in the browser, that'd be much much better :)
... the browser being able to typecheck (and reject violating code) itself is certainly something I'd love to see eventually, but fully agreed, this is not happening anytime soon.
One advantage of Typescript in the context of performance should be that it nudges one not to change the 'shape' of runtime objects too much, which should allow less runtime overhead in JS engines because code doesn't need to be re-jitted as often.
This doesn't require the type annotations at runtime though, it's just a side effect of code being written against a static type system.
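A small sketch of that side effect (illustrative; engines differ in the details of hidden classes and inline caches):

```ts
interface Point { x: number; y: number }

function norm(p: Point): number {
  return Math.hypot(p.x, p.y);
}

// The type forces every caller to pass the same shape, so the engine
// can compile `norm` once and keep reusing the optimized code:
norm({ x: 3, y: 4 });
norm({ x: 1, y: 2 });

// In untyped JS nothing stops callers from passing { y, x } (different
// property insertion order can mean a different hidden class) or
// { x, y, z }, either of which can deoptimize the compiled function.
```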
A lot of apps would (or at least should) still want to strip types for bundle size reasons though.
To take one extreme example, a library I work on includes an API for calling a JSON RPC server. Instead of manually implementing each call, we use a proxy object that converts any method call on it to a JSON RPC call. Then layer on types so given an RPC object you know every method on it and have typed input params and output. This means you can have any number of methods without increasing your bundle size, because all the types disappear at runtime. It also means you can add your own methods if you’re talking to a server that implements custom ones by just defining types. If you shipped this to the browser with the types then it’d be a much bigger bundle than without them.
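For illustration, here is a condensed sketch of that pattern, with hypothetical method names and URL rather than the actual library's API:

```ts
// Hypothetical method signatures -- a real server would define its own.
interface RpcMethods {
  getBlockNumber(): Promise<number>;
  getBalance(address: string): Promise<string>;
  // ...adding a hundred more methods here costs zero runtime bytes
}

// One tiny Proxy is the entire runtime; the types above do the rest.
function makeRpcClient<T extends object>(url: string): T {
  return new Proxy({} as T, {
    get: (_target, method) =>
      async (...params: unknown[]) => {
        const res = await fetch(url, {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify({
            jsonrpc: "2.0",
            id: 1,
            method: String(method),
            params,
          }),
        });
        const { result } = (await res.json()) as { result: unknown };
        return result;
      },
  });
}

const rpc = makeRpcClient<RpcMethods>("https://rpc.example.com");
// rpc.getBalance("0x...") is fully typed, yet the whole client after
// type stripping is just the Proxy above.
```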
My take as an outsider: Google has absolutely no interest in this, especially with their recent cost-cutting measures. Google cares about things that make the "web" better for the end users so that they can sell more ads, not developer tools. TypeScript or JavaScript doesn't matter that much to Google, and actually Google probably doesn't want to see TypeScript files distributed over network (which doesn't make much sense in the first place). In all honesty, Microsoft understands development experience much better than Google and most other companies. They literally own Visual Studio, Visual Studio Code and GitHub and sell products/services for money.
What broken tooling are you talking about? tsc is broken?
IE failed because it was a horrible browser that didn’t evolve for years and was incompatible with major web standard developments. Nothing to do with typescript, an open source, best in class type system and type checker.
IE failed because they tried to define the standard as themselves. I argue we’re witnessing that again from the same company, which only gave up that strategy once they had TypeScript, GitHub, and npm locked in.
I’m not bullish on political strategies being technical solutions, which is the premise.
TypeScript has nothing to do with Internet Explorer, true, but is it really not obvious that it is the same tactic under a different brand? Become the standard, steer the committee.
And it's broken in that copying code between systems requires compatibility between configurations, which should be a red flag for any language.
Yes, the feature is about being able to run typescript scripts. It’s not a type checker, it is similar to ts-node, deno, bun, etc. Typescript has been designed for that specific purpose.
Seen and heard; my original point was that TypeScript is not poised to be a TC39 standard.
This is still “runs some TypeScript”, not “runs every TypeScript file”.
“At least initially in this PR no transformation is performed, meaning that using Enum, namespaces etc... will not be possible.”
This type of nuance is the core of why TypeScript is a headache for any organization with more than a single codebase: JavaScript is portable; TypeScript is in theory, but not in observed practice.
It's support for TypeScript as long as you are only using it for type checking, not if you are also using features that are not supported in the JavaScript version you are targeting.
> not if you are also using features that are not supported in the javascript version you are targeting.
This is only 'half-assed' anyway: TS will only emulate new language features on an older JS target version, but not any JavaScript runtime features (like new Object methods). For the latter you will still need a separate polyfill solution.
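A sketch of the distinction, assuming a tsconfig along the lines of `{ "target": "ES2017", "lib": ["ES2021"] }`:

```ts
declare const user: { profile?: { name?: string } } | undefined;

// Syntax is downleveled: tsc rewrites optional chaining and nullish
// coalescing into ES2017-compatible conditionals.
const name = user?.profile?.name ?? "anonymous";

// Runtime APIs are not: this type-checks (the lib says it exists) and
// is emitted unchanged, so an ES2017 engine throws at runtime unless
// a polyfill such as core-js is loaded separately.
const cleaned = "a,b,c".replaceAll(",", " ");
```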
IMO, writing (hard-coding) TypeScript is deprecated and a waste of time. With all the tech available nowadays it is possible to do the entire type check automatically in the IDE, even more so now with the help of AI. It's just a matter of time before we stop hard-coding type info. Better to invest time and money in making IDEs work better for us.
Why would you describe that "yolo"? If you're writing TS, you already have a TS linter that checks whether code has typing problems or not (at least, I certainly hope you do?). It's not really Node's job to do that linting, its job is to execute the JS that's hiding in the TS. It'd be "handy" if it did, but it'd also be a bit weird when there are already TS linting tools. It'd just hold up landing any sort of TS support that much longer.
It would be nice for debugging if at least simple npm packages could just bundle their .ts files without any processing, so we could see the comments and types as they existed in the git repo. Apps can always minify them later.
You can simply create npm packages which contain only the 'unprocessed' TS source files (or really any type of files; for instance, I experimented with using npm as a package manager for C/C++ projects in the past, and it works just fine). Pre-bundling or compiling from TS to JS is just a convention, and in the case of bundling not a good one IMHO, because bundling should only be a final step in the top-level project. One good reason to compile the package content to .js/.d.ts/.map files is that the resulting package is usable in both JS and TS projects.
The nice thing about this change to Node.js (when it’s no longer experimental) is that you could just distribute .ts files and JS projects could use them.
`"type": "module"` requires file extensions and does not support importing folders the way most people are used to, so it will not be compatible with most existing TypeScript code.
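Concretely (paths hypothetical), the ESM resolver takes specifiers literally, with no extension guessing or directory index lookup:

```ts
import { helper } from "./utils/helper.js"; // OK: extension spelled out
// import { helper } from "./utils/helper"; // ERR_MODULE_NOT_FOUND
// import { helper } from "./utils";        // ERR_UNSUPPORTED_DIR_IMPORT

helper();
```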
Will it transform esm import syntax into require statements?
I'd prefer it break existing code to enforce correctness
I was not aware of this. I did see ReScript covertly bring some sanity to some NodeJS projects, and more than once. So this history of the project is worth digging into. Thank you for surfacing this insight.
Side note, but IMO TypeScript is too complicated. They should have stuck with a reasonably simple type system, but now I see projects with incomprehensible and frankly unmaintainable TypeScript consisting of extremely complex generics, type conditionals, and type constraints. Basically, if you aren't careful you'll find your project metaprogramming in TypeScript's Turing-complete meta-language...
TypeScript in its current usage reminds me of Hello, World! or FizzBuzz Enterprise Edition. There's almost more code dedicated to typing than the actual running software itself in some codebases I've seen.
The authors trick you with reasonable examples on https://www.typescriptlang.org, but in the wild, you have these ridiculous codebases that couldn't control themselves and they have this insane ratio of multiple declaration files to actual source files and you have to ask yourself, "Are you writing software to get something actually done, or do you just like write type definitions?"
Even people who write in C++ don't go to the lengths that TypeScript users do. It's super weird and cult-like.
I also get lost in types sometimes, but the point of types is that they help you. Typescript lets you just use `as any` and `as unknown` as you please, but if you want complex constraints you will need complex types.
There are some type libraries that parse GraphQL queries and CSS selectors. They’re crazy to look at but they’re hugely helpful.
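For a small taste of the technique those libraries build on, here is a sketch of a type that pulls `:param` names out of a route string entirely at the type level:

```ts
// Recursively extract ":param" segments from a route string.
type Params<Route extends string> =
  Route extends `${string}:${infer Param}/${infer Rest}`
    ? Param | Params<Rest>
    : Route extends `${string}:${infer Param}`
      ? Param
      : never;

type P = Params<"/users/:userId/posts/:postId">;
// type P = "userId" | "postId" -- computed at compile time, nothing at runtime
```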
Maybe I just have bad luck, but most of the libraries I've tried that are "crazy to look at" seem good in theory but are janky in practice. For example, openapi-fetch (https://github.com/openapi-ts/openapi-typescript/tree/main/p...), on paper seems great, but has lots of jank in practice.
And I would wager the bugs and jank are in no small part due to the extremely complex generics/constraints.
To be clear, I do like the type-checking benefits of TypeScript, but it requires some discipline to keep it simple. Get one unchecked TS astronaut on the team and the TypeScript can get complex and esoteric very quickly.
This is something I've struggled with as a mostly solo dev. I've most often just stuck with vanilla javascript because of course that's good enough, but definitely there have been times where I hoped I had some typing helping me out. Alas, I haven't quite finagled the art of finding a way to use it "just a little bit."
I have mixed feelings about this. While I do use TS with Node.js today and absolutely like the concept, its type system is still far from something mature and stable like C#. We keep running into ceilings (EDIT: lack of completeness/depth, not lack of complexity) all the time, and TypeScript questions on Stack Overflow is basically a library of workarounds. Mostly bad ones. So if I worked on Node.js I would prefer it to evolve more before actually marrying and having kids with it. But at the same time, I like the direction Node.js is taking.
What's an example of a ceiling? Out of all the mass-market programming languages, TS arguably has the most advanced type system in the world. It's a modern marvel that they got it working on top of Javascript.
Certainly advanced, but not mature in my experience. Using e.g. classes and inferred generic function arguments quickly reveals a lot of features that are missing. Often some similar feature is present, but it lacks depth/completeness. Lots of good discussions to read in the TS repo on GitHub if you're interested: optional generic type inference, extends oneof, generic values, keyof a subset, conditional types, etc.
I want to emphasize that the reason we keep running into "ceilings" is probably because of its advanced type system. Libraries and frameworks are using those type features and when we can't keep building on the type - we end up casting to unknown and reconstructing it. Which feels worse than not being able to construct that complex type at all.
I think it was the seemingly impossible challenge of bringing typing to a dynamic language that made TypeScript so powerful in the first place.
All other static languages start bottom up, simple to more complex, but end up getting boxed in by their own design. TypeScript started top down, trying to map itself on to a fully dynamic language. Never getting boxed in, just trying to 'fill' the box that is all the possibilities of JavaScript. 10 years on and TypeScript is still exciting, making significant updates and improvements.
Typescript isn't particularly powerful compared to other non-mainstream languages, though, which is why the parent comment was careful to add that caveat. Which is to say that I'm not sure the idea that "all other static languages" start simple and get boxed in stands up.
You may have a point that Typescript would have been relegated to obscurity with all the others had it tried to start "top down" as a brand new language. There may be some truth that it is a necessity of a language to start simple in order to become accepted in the mainstream and that Typescript only made it because it rode on the coattails of a language that also started simple: Javascript.
I don’t know what you mean here by advanced? If you mean the sheer amount of fuckery they have to do in order to make it work with JS perhaps you have a point.
If you mean expressiveness or consistency or soundness, then no, it’s actually very bad compared to almost anything else, and I think the longer it goes on the more it starts to feel like a house of cards.
The upside I guess is that whenever Safari decides to get their shit together Web Assembly is well placed to get us out of the scenario where we are forced to use JS and as an extension Typescript at all for most things and actually good language choices with reliable type systems like Dart, Kotlin and C# all become viable options.
There is no way I’d choose JavaScript over those other options in the majority of scenarios unless I was forced to.
> The upside I guess is that whenever Safari decides to get their shit together Web Assembly is well placed to get us out of the scenario where we are forced to use JS and as an extension Typescript at all for most things and actually good language choices with reliable type systems like Dart, Kotlin and C# all become viable options.
Out of those three only Dart has nice DX story compared to JS world.
In C# you can't work with optional generics because an optional reference type is different from an optional value type.
C#'s poor type inference often requires you to type out types thrice. You can't declare constants or class members with type inference.
The only way to define sum types (A | B | C) is through interfaces, and I'm pretty sure they can't be sealed. Defining product types (A & B & C) is impossible.
Sorry I probably used the wrong term, not a native English speaker. I didn't mean lack of complexity or lack of "features" but rather the lack of carefully thought-through feature "depth". Like, we can infer generic arguments which is nice, but then we try doing that with some keyof complex type and it doesn't work. And later we find an issue on GitHub saying that it's not implemented. Which is fine, I love TS anyway and it's evolving.
Well said. Bearing that in mind, we've found that JSDoc is a reasonable substitute for some TypeScript applications; however, JSDoc has limitations that we've run into frequently as well.
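For reference, a minimal sketch of the JSDoc approach: plain .js files that the TypeScript tooling can still check, with no build step required:

```js
// @ts-check

/**
 * @param {{ id: number, name: string }} user
 * @param {boolean} [verbose] optional flag
 * @returns {string}
 */
function describe(user, verbose) {
  return verbose ? `${user.name} (#${user.id})` : user.name;
}

describe({ id: 1, name: "Ada" }, true); // checked
// describe("oops"); // flagged by the checker, still just JavaScript
```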
Yes. The question is where you do the conversion and how much of a conversion to do. This allows a key conversion (type stripping) directly at load time by Node rather than needing an extra step (an external type stripper such as Typescript or esbuild) sometime before passing the file to node for loading.
In our codebase we started to disallow enums in favour of string literal types, and once folks get over the ingrained "this needs to be an enum" (coming mostly from other languages like Java), it's not much missed.
Enums are one of the very few things in TypeScript that seem to not have turned out that well, but it's relatively easy to work without them, with string-literal types and such, derived from some const in case they're also needed at runtime.
There are union types for strings; and there are plain javascript objects (typed as const) if "namespace.name" syntax is desired. With these available, what is the point of enums?
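A minimal sketch of both alternatives (note that a type alias and a const may legally share a name, which keeps call sites looking enum-like):

```ts
// Pure type: zero runtime footprint.
type Status = "active" | "archived";

// Runtime values with Namespace.Name syntax, derived from a const object.
const Status = {
  Active: "active",
  Archived: "archived",
} as const;

function archive(status: Status) {
  console.log(`archiving ${status}`);
}

archive(Status.Archived); // reads like the enum did
archive("active");        // plain literals work too
// archive("deleted");    // compile-time error
```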
Yes. Both namespaces and enums (and probably the "private" keyword on class methods) are early additions to the language that would never have been added if TypeScript had aligned closely with ECMAScript from the very start.