JS JITs use something called an inline cache (IC) to speed up property lookups by remembering the shapes of the objects each site has seen. They consider two objects to have different shapes if the keys are different (even if just one is added or removed), if the values of the same key are different types, or if the order of the keys changes.
If you have a monomorphic function (1 type), the IC is very fast. If you have a polymorphic function (2-4 types), the IC gets quite a bit slower. 5+ types is called megamorphic, and at that point the engine basically forgoes the IC altogether and also disables most optimizations.
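To make the shape rules concrete, here's a rough sketch with toy names (simplified; the exact heuristics differ between V8, SpiderMonkey, and JavaScriptCore):
interface Point { x: number; y: number }
function readX(p: Point) { return p.x } // this property access gets its own inline cache
readX({ x: 1, y: 2 }) // shape {x, y} — monomorphic, fast path
readX({ x: 3, y: 4 }) // same keys, same order, same value types — IC hit
readX({ y: 5, x: 6 }) // same TS type, but the keys were created in a different order — a second shape, now polymorphic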
TS knows how many variants exist for a specific function and even knows how many of those variants are used. It should warn you when your functions are megamorphic, but that would instantly kill 90% of its type features because those features are actively BAD in JS.
Let's illustrate this.
interface Foo {
  bar: string | string[]
  baz?: number
  blah?: boolean
}
Looks reasonably typical, but when we use it:
function useFoo(foo: Foo) { /* ... */ }
useFoo({bar: "abc", baz: 123, blah: true}) // monomorphic
useFoo({bar: "abc", baz: 123}) // now a slower polymorphic
useFoo({bar: "abc"}) // a third shape
useFoo({bar: ["b"], baz: 123}) // a fourth: bar is now an array
useFoo({bar: ["b"], baz: 123, blah: true}) // a fifth — we just fell off the performance cliff
As you can see, getting bad performance is shockingly easy. If these calls were spread across five different files, they'd look similar enough that you'd have a hard time realizing why things were slow.
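The usual escape hatch (this is a sketch of mine, not something TS does for you) is to normalize the object before it crosses the function boundary, so every call site hands the JIT the same shape:
function normalizeFoo(foo: Foo) {
  // always the same keys, in the same order, with the same value types
  return {
    bar: Array.isArray(foo.bar) ? foo.bar : [foo.bar],
    baz: foo.baz ?? 0,
    blah: foo.blah ?? false,
  }
}
useFoo(normalizeFoo({bar: "abc"})) // every call now presents a single shape to useFoo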
Union/intersection aren't directly evil. A union of a single underlying type (e.g., a union of string literals) is actually great: it offers more specificity while not increasing function complexity. Even a union of different primitive types is sometimes necessary, and the cost you are paying is at least visible (though most JS devs are oblivious to it).
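For instance, something like this (toy names) costs nothing: every value is still a plain string at runtime, so the function stays monomorphic:
type Direction = "north" | "south" | "east" | "west"
function move(dir: Direction) { /* dir is always just a string — one representation */ }
move("north")
move("west")
// a union of different primitives is sometimes necessary, but at least the cost is in plain sight
function pad(value: string | number) { /* string and number are different representations */ }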
Optionals are somewhat more evil because they hide the price you are paying.
[key:string] is potentially evil. If you are using it as a kind of `any`, then it is probably evil, but if you are using it to indicate a map of strings to a type, then it's perfectly fine.
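Roughly the distinction I have in mind (hypothetical names):
type ScoresByUser = { [key: string]: number } // a genuine map of strings to one type: fine
type GrabBag = { [key: string]: any }         // an index signature standing in for `any`: probably evil
const scores: ScoresByUser = { alice: 10, bob: 7 }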
keyof is great for narrowing the possible keys until you start passing those keys around the type system.
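A sketch of both sides (toy names): the first function only uses keyof to constrain a string, while the second passes the key through the type system generically, which is where the trouble starts:
interface Config { host: string; port: number; secure: boolean }
// fine: keyof just limits which strings are accepted, the object shape never varies
function getSetting(config: Config, key: keyof Config) {
  return config[key]
}
// riskier: the key (and the object) now flow through the type system for any T
function pluck<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key]
}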
Template unions are also great for pumping out a giant string enum (though there is a definite people issue of making sure you're only allowing what you want to allow), but if they get passed around the type system for use, they are probably evil.
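By a template union I mean using a template literal type to stamp out a big string enum, roughly:
type Size = "small" | "medium" | "large"
type Color = "red" | "blue"
type Variant = `${Size}-${Color}` // expands to six strings: "small-red" | "small-blue" | ...
const v: Variant = "medium-blue" // still just a plain string at runtime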
Interface merging is evil. It allows your interface to spread across multiple places, making it hard to follow and even harder to decide whether it will make your code slow.
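A sketch of the merging behavior (the file names are made up; the same thing happens within one file):
// settings.ts
interface Settings { theme: string }
// plugin.ts — same interface name, so TS silently merges the declarations
interface Settings { telemetry?: boolean }
// consumers see the combined { theme, telemetry? } with no single place to read it
function applySettings(s: Settings) { /* ... */ }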
Overloads are evil. They pretend you have two different functions, but then just union everything together.
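What I mean by "union everything together", as a sketch:
// the overload signatures look like two separate functions...
function parse(input: string): number
function parse(input: string[]): number[]
// ...but there is only one implementation, and its parameter is effectively the union
function parse(input: string | string[]): number | number[] {
  return Array.isArray(input) ? input.map(Number) : Number(input)
}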
Conditional types are evil. They only exist for creating even more complex types, and those types are basically guaranteed to be impossible to fully understand and to allow very slow code.
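For reference, this is the kind of thing I mean (a toy conditional type); the machinery is clever, but nothing about it stops wildly different shapes from flowing through the same code:
type Unwrap<T> = T extends Promise<infer U> ? U : T
type A = Unwrap<Promise<string>> // string
type B = Unwrap<number>          // number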
Mapped types are evil. As with conditional types, they exist to make complex and incomprehensible types that allow slow code.
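A toy example of the pattern:
interface User { id: number; name: string }
type PartialUser = { [K in keyof User]?: User[K] } // every property becomes optional
// which is exactly the optional-fields-mean-many-shapes problem from the Foo example above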
Generics are the mother of all that is evil in TS. When you use a generic, you are allowing basically anything to be inserted, which means your type is instantly megamorphic. If a piece of code uses generics, you should simply assume it is as slow as possible.
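The smallest possible sketch of why:
function identity<T>(value: T): T { return value } // T can be literally anything
identity(1)
identity("one")
identity({ bar: ["b"], baz: 123 })
// three calls, three completely different shapes through a single function body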
As an aside, overloads were a missed opportunity. In theory, TS could speed everything up by generating all those different function variants at compile time. In practice, the widespread use of generics everywhere means your 5 MB of code would instantly bloat into 5 GB of code. Overloads would be a great syntax for saying you care enough about the performance of a specific function that you want multiple versions generated and the right one linked at compile time. Libraries like React that make most of their user-facing functions megamorphic could probably see a decent performance boost from this in projects that use TS (they already try to do this manually by using the megamorphic function to dispatch to a bunch of monomorphic functions).
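A sketch of that manual trick (hypothetical names, not React's actual code): the public entry point eats the megamorphic cost once, then immediately hands off to shape-specific helpers:
function renderText(node: { text: string }) { /* only ever sees one shape */ }
function renderList(node: { items: string[] }) { /* only ever sees one shape */ }
// megamorphic on purpose, but tiny: it just dispatches
function render(node: { text: string } | { items: string[] }) {
  if ("text" in node) renderText(node)
  else renderList(node)
}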