> One of the key ideas with Ezno is that it attempts "maximum" knowledge of a source
I've been working on a new TypeScript-like language myself from scratch, which among other things takes this approach. Mine can do some of the numeric stuff shown here, but I'm jealous of (inspired by? :)) some of the crazier stuff going on here, like usage-based argument types and tracing not just value shapes but value reference identities
The "automatic generics" feature in particular is absolutely bonkers. It never even occurred to me that you could do that. I'm wondering if there are unforeseen edge-cases, but also wishing I had it at work. Clunky generics syntax is one of the worst parts of TypeScript, even while generic function types are one of its best parts.
Wow, and side-effects-tracking too! Amazing
I am curious whether some of these checks will be limited by JavaScript's dynamism. Part of why I'm doing mine as a new language is that JavaScript's semantics are just way too flexible to form some of the guarantees I feel like you need for some of this aggressive inference. But now I'm questioning that.
Either way, this is insanely impressive. Definitely not just yet-another-JavaScript-toolchain.
Thank you. Just checked out the Bagel post (https://www.brandons.me/blog/the-bagel-language) and it looks really cool. Identifying pure functions (whether by syntax annotation or by synthesis) is a really good idea; it gives me some ideas for doing function inlining in Ezno. I like the "Misc niceties" section, a few of those may or may not be on Ezno's todo list :)
The automatic / inferred generic restrictions are quite cool. https://hegel.js.org/ got there before me! Basic restriction modification is quite simple, e.g. `(x) => Math.sin(x)`: `x` wants to be a number, so the checker can add that restriction. It gets more difficult with higher poly types. `(someObj) => Math.sin(someObj.prop1.prop2)` requires modifying not just `someObj` but a property on a property on it. And `(x, y) => printString(x + y)` requires doing even more complex things. But it's definitely possible!
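To make those three cases concrete, here is roughly what the inferred signatures would look like if written out by hand in today's TypeScript (my approximation of the result, not Ezno's actual output; `printString` is the hypothetical function from the examples above, defined here just to keep the sketch runnable):

```typescript
// `(x) => Math.sin(x)`: x must be a number.
const f = (x: number) => Math.sin(x);

// `(someObj) => Math.sin(someObj.prop1.prop2)`: the number restriction has to
// be pushed through two levels of properties.
const g = (someObj: { prop1: { prop2: number } }) =>
  Math.sin(someObj.prop1.prop2);

// `(x, y) => printString(x + y)`: the checker must reason about the `+`
// operator before it can restrict x and y. One consistent solution is strings:
const printString = (s: string): void => { console.log(s); }; // hypothetical helper
const h = (x: string, y: string) => printString(x + y);
```

The last case shows why this gets hard: `+` admits both numbers and strings, so the restriction on `x` and `y` depends on what `printString` demands.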
Replying to both of you, I'm also working on something similar, but in Lua!
I've been searching for people who are doing this for inspiration and maybe exchanging ideas in chat. What I have seems very similar to Ezno when it comes to tracking mutations and side effects and the key idea of "maximum knowledge of a source".
I struggle a bit with communicating about this topic, as my way of learning has been more of a self-taught trial-and-error process. It almost feels like the people who build type systems are speaking another language (especially those with a Haskell background), but I'm slowly starting to understand some of it. Your blog post, however, was easy to understand.
Inferring from usage is something I've been thinking about every now and then, and it seems doable if the type can mutate from some unknown type to the first expected type. In practice this doesn't seem to become a big deal if you always require input types to be typed (from arguments or some other external) but it's an interesting problem.
I think the way I would approach mutating `someObj.prop1.prop2` is to do a mutation of the field when calling the function signature. I'd have to know a few things about `prop2` though.
So it would effectively be something like `Math.sin = (val) => { let [parent, key] = parentObject(val); parent[key] = number; }`
Then it would make use of the way I track mutations already.
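A minimal sketch of that idea in TypeScript (my interpretation of the `parentObject` trick described above, not actual code from either project): address each value as a `[parent, key]` pair so a restriction can be written through nested properties.

```typescript
type Ty = "unknown" | "number" | "string";
interface Slot { ty: Ty }

// The checker's view of an object like `someObj.prop1`: a record of named
// slots. restrict() plays the role of `parent[key] = number` above.
function restrict(parent: Record<string, Slot>, key: string, ty: Ty): void {
  const slot = parent[key];
  if (slot.ty === "unknown") {
    slot.ty = ty; // first use fixes the previously-unknown type
  }
}

// Checking `(someObj) => Math.sin(someObj.prop1.prop2)`:
const prop1: Record<string, Slot> = { prop2: { ty: "unknown" } };
restrict(prop1, "prop2", "number");
// prop1.prop2.ty is now "number"
```

Because the restriction is written through the parent record rather than a copied value, it lands on the same slot the rest of the mutation tracking already observes.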
My project's status is that it's not production-ready, but it's in the open. The playground is somewhat useful for analyzing what Lua scripts might possibly do.
Not sure if I have any big plans, I just think it's fun and interesting to work on and maybe I want to use it for my other Lua projects. I'm not very happy with the codebase as I've written it in Lua. I'm trying to bootstrap it but it's growing in complexity faster than I can bootstrap it. But one step at a time and I should be there someday, maybe I can even port it to some other language like Rust when I have a better understanding of what I'm actually doing. :)
Fwiw, I took the approach of having a hard separation between stateful and functional code partly so that I could dodge most of these hard questions around mutation tracking ;)
The stateless code can be really aggressively inferred, and even re-ordered, inlined, detected as dead and thrown away, memoized, whatever else is useful. And then the (hopefully small) stateful code can just be typed at roughly the same level TypeScript already does
Automatic generics is how C++ templates behave. That behavior is widely agreed to be absolutely horrible, and is now being remedied with what they call Concepts, which constrain the generic signatures at least a little.
The issue is that when you have errors, it is impossible to determine whether the error is at the caller, the callee, or the type signature. Essentially you have one equation with three unknowns. The same issue propagates into code navigation and refactoring.
It's like regressing into a dynamically typed language, except the errors are given at compile time. For small, simple functions like the addOne example, it's great for reducing boilerplate typing, but once you start relying on it for more complex types it can easily get out of hand.
I don't think the problem you describe would happen in the version seen in Ezno, since the arguments and return can still get explicit type declarations and the "generic" version is just a more-specific refinement of that
As a programming language geek, TypeScript is one of the best things to ever happen to the industry. It's finally dispelled the notion that "types == Java == bad/annoying", and shown how powerful and convenient a type system can actually be.
(This is something that's been possible for decades, but it never hit mainstream before as it's hard to implement it well enough to satisfy the silly preferences of us typical programmers :)).
Typescript has a decent type system, but it is really held back by being "just" a type checker for JS, with all the JS semantics.
The way sum types are done in TS is really awkward.
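For context, the usual TS encoding of a sum type is a discriminated union, where the tag field and the matching on it are maintained entirely by hand (example mine):

```typescript
// Each variant carries its own hand-written "kind" tag.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  // Narrowing on the tag stands in for pattern matching.
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "rect": return s.width * s.height;
  }
}
```

It works, and the compiler does check exhaustiveness here, but compared to a built-in sum type (as in ML-family languages) the tags, the union, and the dispatch are all manual conventions.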
The type system is unsound.
What's worse is that you never have a guarantee that the types are actually correct at runtime, due to bad third party typings, compiler limitations, use of any, ...
It's still a lot better than using plain JS, and a lot of the limitations aren't by choice, but come from the need to compile down to and remain compatible with plain JS.
I wish Microsoft would make a language almost exactly like Typescript, but where code has to be strictly typed (no any, unknown, etc.) and it would compile to a normal binary, with some sort of GC, for multiple platforms.
It would hit the sweet spot for me, I know Rust is popular these days, but it seems like it's made for type astronauts, and sometimes I just want to write some code and get things done quickly and don't care about squeezing out every last drop of bare metal performance or abstracting seven layers of types to please a borrow checker.
That still allows `x as string` and many other traps. Typescript just isn't type-safe, and this has bitten me so many times in real world projects with strict mode and all. The best thing I can say about Typescript is that it's an improvement over Javascript.
C# does not have a `type` keyword and really lags behind with enums. You can theoretically replace an algebraic type system with inheritance, but that's often ugly and you probably shouldn't do it.
> I wish Microsoft would make a language almost exactly like Typescript, but where code has to be strictly typed (no any, unknown, etc.) and it would compile to a normal binary, with some sort of GC, for multiple platforms.
There’s no such thing as “fully sound” type system, outside theorem provers like Lean. Every type system has a trapdoor. Rust has “unsafe”. Haskell has “unsafePerformIO”. And TypeScript has “any”. Soundness is just as much a property of a language’s culture as its implementation. Practically I find that TypeScript is sound enough for my purposes when most of the strictness options are enabled.
I agree with your point that TS is held back by being "just" a type checker. In particular, in Rust I can write code like this:
`let _: Result<Vec<_>, _> = some_iter.collect();`
or like this:
`let _: Vec<Result<_, _>> = some_iter.collect();`
The only thing that has changed here is the type annotation. Both will compile, but the implementation of `collect` is determined by that type. As far as I know this isn't the case in TypeScript, although please tell me if I'm wrong; I have limited experience with it and I'm thinking more from a mypy perspective.
This ends up meaning that my types don't drive the program as much. There's this sort of declarative "here's the thing I want, use an implementation of that method to give it to me" that feels very powerful. I miss this a lot with mypy, my type annotations feel very much like annotations, they are a separate part of my program.
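A small illustration of the difference (mine): in TypeScript the annotation is checked against the value but never chooses an implementation, so both lines below run exactly the same `map`, whereas in the Rust snippets above the annotation selects which `collect` implementation runs.

```typescript
// The annotation only constrains; it cannot dispatch.
const a: Array<number> = [1, 2, 3].map(x => x + 1);
const b: ReadonlyArray<number> = [1, 2, 3].map(x => x + 1); // same runtime code

// To get "return-type-directed" behavior you must pick the function yourself:
const toSet = <T>(xs: T[]): Set<T> => new Set(xs);
const s = toSet([1, 2, 3]);
```

There is no TS equivalent of writing `: Set<number>` on the left and having the right-hand side change what it builds.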
If TS were to lean into itself as its own thing, not just a way to do better JS, I think that'd be amazing. I'd love to see an AOT compiled TS where types are a semantic part of the program, I'd love to see it drop `any` entirely as well, and drop support for using JS in TS.
Another annoying part of this is how every function declaration effectively doubles the list of parameters, once as type signature and once in the usual JS function syntax. And those don't even have to match, neither in length nor name of the parameters.
I think a lot of those decisions were made to maximize easing existing JS code into typescript. And it probably helped with adoption. Who knows if or how much it would have struggled with traction if anyone wanting to use it would have to start from scratch or something like it.
Anywhere in non-enterprise web development in the heyday of Ruby/Python/PHP.
Static typing fell out of favor hard among certain crowds. Typescript brought many of them back to the world of static typing, and all three of the aforementioned languages are getting more static support, so it isn't nearly so prevalent as say 10 or 15 years ago.
> Ezno's type checker is built from scratch. Getting the features I wanted requires a lot of different functionality and needed several new ideas that as far as I know aren't present in any type-system or existing checkers.
I would disagree that this is a TypeScript compiler. It's a mostly-TypeScript-compatible compiler/type-checker.
The Typescript team iterate on it reasonably quickly. If you fall behind as the Typescript team adds new features, and you cannot check that code, is it still "Typescript"?
That is actually completely fair - I still had OP's editorialized headline in mind when reading the site, and didn't notice that all it says about TypeScript is "The checker is fully compatible with TypeScript type annotations", which is probably accurate!
What I'd really like is a tsc that creates code to check types at runtime, like for API boundaries and parsing unknown input. Kind of like a built-in Zod. Maybe it's just an automatic type guard anywhere you have an `as SomeType` or an ignore directive.
You can use io-ts [0] to define your types, and it'll generate functions to typecheck for you. Syntactically it's a bit gnarly and the documentation isn't great; a first-party solution would definitely be nicer. But it works, and it's amazing that it works.
It's a neat library, but you end up defining your types in an external (outside of TS) syntax, and you lose a lot of language-server features. Also, last I checked it could not handle generics. But it's been a while.
You don't lose any language server features, you just access them slightly differently. Each function in io-ts, zod, and other TS libraries like them is a type guard, and they have companion utility types to extract the static types from their runtime definitions (with `typeof`). I'm certain that io-ts handles generics (I've used it to do so), albeit again slightly differently, in that you need to define them as generic type guards.
I think the clamor for runtime type checking in TS is partly because type guards and their benefits could be better explained at the language level, and partly that libraries implementing them effectively are mostly aimed at users who already understand those benefits.
You really don’t want pervasive runtime type checking, except at API boundaries where data is untyped (i.e. the appropriate type is `unknown`). Type guards are exactly designed for that use case. For other cases where the types are well known, runtime type checking is needlessly expensive and redundant.
Agree with all of that, but also, it's been at quite a while since I used it. I'm sure it's improved a lot since then, and my memory might be off as well. I really remember not being able to get it to work with generics. But maybe I didn't read the docs deep enough.
We just do what you describe now, and don't even really want automated type checking. We just write our own assertion functions. The weakness of writing your own is that you have to sort of "manually" upgrade them when the type changes, or they drift and your editor won't tell you about it.
If your editor isn’t telling you there’s a type error in your guard, that’s usually a good sign your guard is too large. Even if you’re rolling your own (which seems an odd choice but I’ll take it as read that you have reasons), it’s a good idea to take inspiration from the prior art. With libraries like zod/io-ts etc, it’s harder to end up with a mismatch like you describe than to always have your types and guards in sync, because the guards are built from small composable primitives and the types are derived from them. Larger guards built without that are basically a lot of ceremony around `as Foo`, with all the false sense of safety that implies.
Not trying to dissuade you from rolling your own, mind you. Heck, it’s been stalled for a while as I focus on other things, but I’m rolling my own whole library (also for reasons, foremost of which is handling JSON Schema generation and validation at runtime without relying on codegen like other solutions do).
It's not external to TS. You write your types by passing object literals to the functions that generate the validators; TypeScript then infers shockingly precise types, which can be extracted using TypeScript's type manipulation utilities.
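A toy version of that pattern (NOT io-ts's real API, just an illustration of how a static type can be derived from a runtime validator definition built out of small primitives):

```typescript
// A validator is just a type guard.
type Validator<T> = (x: unknown) => x is T;

const isString: Validator<string> = (x): x is string => typeof x === "string";
const isNumber: Validator<number> = (x): x is number => typeof x === "number";

// Compose primitives into an object validator; the guarded type is computed
// from the shape of the validators, not written out by hand.
function object<S extends Record<string, Validator<unknown>>>(shape: S) {
  return (x: unknown): x is { [K in keyof S]: S[K] extends Validator<infer T> ? T : never } => {
    if (typeof x !== "object" || x === null) return false;
    return Object.entries(shape).every(([k, v]) => v((x as Record<string, unknown>)[k]));
  };
}

const isUser = object({ id: isNumber, name: isString });
// Extract the static type from the runtime definition:
type User = typeof isUser extends Validator<infer T> ? T : never;
```

Because `User` is derived from `isUser`, the guard and the type cannot drift apart; that is the property the libraries above provide with much more polish.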
I really wanted this in my earlier usage of typescript as well.
But the solution really is to assume `: unknown` at API boundaries and run the values through assertion functions: `isSomeType(x: unknown): asserts x is SomeType`.
After using this sort of pattern, I don't think I would want automatic runtime checks anymore, because creating your own explicitly and calling it explicitly works out not so bad.
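Spelled out, the pattern looks like this (`SomeType` and its fields are illustrative, not from any particular codebase):

```typescript
interface SomeType { id: number; name: string }

// Assertion function: throws on bad input, narrows the type on success.
function assertIsSomeType(x: unknown): asserts x is SomeType {
  if (typeof x !== "object" || x === null) throw new TypeError("not an object");
  const rec = x as Record<string, unknown>;
  if (typeof rec.id !== "number") throw new TypeError("id must be a number");
  if (typeof rec.name !== "string") throw new TypeError("name must be a string");
}

// At an API boundary, parsed input starts as unknown:
const payload: unknown = JSON.parse('{"id": 1, "name": "ada"}');
assertIsSomeType(payload);
// From here on, payload is typed as SomeType:
payload.name.toUpperCase(); // "ADA"
```

The explicit call site is the point: the boundary where untyped data becomes typed is visible in the code rather than hidden in compiler-generated checks.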
yeah. You have to assert an object (`Record<string, unknown>`) type first within your asserter (or array or whatever). We ended up having whole stacks of re-usable assertion functions to be used at these boundaries (re-usable on server side too if you're using node!)
Just a simple function (`ensureType`) that checks primitive types (angle brackets for min length). You reconstruct the object using `ensureType`, writing it back into itself. `ensureType` returns what is passed in, with the corresponding type. Has worked well.
At some point I used a TS->JSONSchema generator and that worked out pretty great. Obviously it would be greater to be built into the language but I think that's always going to be out of scope of what TS aims to do.
This x1000! Type guards are a joke and real runtime type checking could be so easy if they prioritized it. Yes you can use zod but then you have to define your types through zod which isn't as neat.
Wow, this is seriously impressive work, if it really works as well as the code examples make it seem. Many of these ideas I have not seen in other JS typecheckers:
- Inferred generics that work
- Effect tracking
- Prepack-style AOT evaluation, but for typechecking
- JSX built in, with proper inference
- Using a typechecker to reduce the amount of work React needs to do (!!!)
- Using a typechecker to make SSR more efficient
- …
These are super interesting and very novel. I hope to see this open-sourced soon; curious if this kind of approach will work at scale.
I didn’t realize it wasn’t Enzo until reading your comment just now, and that’s after skimming the article for several minutes. Now I’m wondering if I’m dyslexic or what.
All of these compiler/transpiler-related projects, especially for a big mainstream language like TypeScript, seem like a full-time job, more so than most other open-source initiatives. You need to keep up with the spec, constantly read the source code, etc. Kudos to those who are giving their time to do this.
Reminds me of Hegel (https://hegel.js.org/), but with TypeScript compatibility. It's a pity Hegel development stopped because of the war in Ukraine (https://github.com/JSMonk/hegel/issues/363). Looks really cool, but in my opinion it would be better to concentrate on type checking, leaving other things (VDOM, SSR) out or in optional plugins.
It appears from their repo that they have 5-6 people working full time on this. Assuming about 250k per developer that puts their burn at 6 * 250k = 1.5 million a year. So yeah they have some time. Although I would be severely concerned at their progress considering it’s already been a year and they don’t seem to have shipped much. No proof of PMF or any sort of revenue with over a third of their runway depleted should have them hustling to get on the boards.
If you're looking for evidence in the public repo, consider what's not in there: one co-founder is missing, and so are the commits from the other author/co-founder for over a year. Not the signals you want in a dev-tooling project.
Except how many users do they have? If I were a VC fund looking to give them another round, I’d want to know their metrics, whether that’s users, downloads, or revenue. Judging by their blog, it’s been a year since they raised the money. Have they produced anything other than a bunch of code? Because code isn’t worth much without users.
Based on the release notes, the Typescript team is working on faster type-checking as continuous goal and has many big long-term irons in that fire. Though a lot of the more recent work has gotten into the performance of projects already scaled into project references and incremental builds, so it may not yet be obvious how much work has gone into performance if you consider your projects small/medium-sized.
It can sometimes be quite an effort to refactor a medium sized project into project references with incremental builds and there's currently no obvious moment where Typescript knows to tell you "you've got a 'large' project now, if you split this workspace into multiple tsconfig projects that smartly reference each other and switched to incremental build flags you'd get a bunch of performance improvements". So that's still a matter of figuring it out for your own projects if you can take advantage of that (and how you would take advantage of that).
My understanding reading it is that it’s implementing something more like SolidJS (which is also a no-VDOM reactive library which looks a lot like React) and Qwik (which also serializes/resumes from state in the DOM), but built into the core compiler rather than as a separate library/JSX transform. I doubt it’ll make React obsolete, especially as React expands in scope (e.g. Server Components), but it might be a good replacement for a lot of use cases. It’s certainly got my attention!
No, definitely not an intention to render anything obsolete. I think React will be around for a while.
Instead, if you like stricter types and `.map`/`.push` JSX, and want to reduce the runtime abstraction by moving it to compile time, then it offers a possible alternative to React. Haven't gone into it too much yet, but there are probably a lot of React applications that are too complicated for the optimiser to work :( and there it won't be an alternative.
Lots of great alternatives to React these days as well which I also want to promote!
TypeScript is great for team collaboration, code refactoring, etc., but I don't think every project should use it; sometimes JavaScript is much easier and faster to implement.
> VDOM is a virtual representation of the document, actual DOM references the document (e.g. .click() isn't on VDOM structures).
This sentence is unreadable. The superfluous/incorrect use of parens around `.click()`, combined with this page's style sheet and the way that paragraph wrapped in my browser, didn't help, but eventually I was able to move past those. (It still doesn't make any sense, but I know that part isn't where the weirdness is.) Still unreadable. Bad for something pulled from a written work that's supposed to be a list of definitions you reference.