> That means that when one side of the API changes, the other side won't even compile until it's updated to match.
Coupling your front end and back end in this way can give you false confidence in making API changes. If you control both sides of the API on a web app that doesn't have too many users yet, then this can be very productive, but consider this situation:
You've made a breaking API change on the back end, and thanks to your type system, you make the corresponding front end changes as well, which is great. You deploy your code, and the back end changes take effect immediately. Visitors who were in the middle of a session are still running the old front end code, and their requests start failing. Cached copies of the front end start failing too.
You can engineer around this, but it's better to have a system that doesn't make introducing breaking API changes too easy. It should be painful.
When you control both front end and back end, you don't have to make a breaking change that affects anyone at all, as long as you're willing to do multiple deploys and maybe wait a little bit.
If you want to change from A to B, your first deploy adds B without stopping A from working. It could be a totally new endpoint, or you could just throw a version field into the request body or querystring. Once everyone has switched off of A, you can remove it from your codebase so only B remains, without actually affecting anyone's session.
If you don't want to pollute all your querystrings or request bodies with version fields, you could have an optional newVersion boolean field that only needs to appear during a switchover, or make a temporary endpoint for updated clients to hit instead of the old endpoint:
# state at start
/foo <- uses API A
# deploy new endpoint
/foo <- uses API A
/foo/new <- uses API B
# wait for everyone to switch to /foo/new, then deploy API B to /foo, removing support for API A (this could be split into two deploys if you wanted)
/foo <- uses API B
/foo/new <- uses API B
# wait for everyone to switch over to /foo, then remove /foo/new
/foo <- uses API B
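The optional-flag variant of this switchover can be sketched in TypeScript. This is a minimal sketch, not anyone's actual code; the names (`handleFoo`, `newVersion`, `FooRequest`) are assumptions:

```typescript
interface FooRequest {
  id: string;
  newVersion?: boolean; // optional flag, only sent by updated clients
}

// API A behavior (to be removed once traffic drains).
function handleFooV1(req: FooRequest): string {
  return `v1:${req.id}`;
}

// API B behavior.
function handleFooV2(req: FooRequest): string {
  return `v2:${req.id}`;
}

function handleFoo(req: FooRequest): string {
  // Old clients omit the flag and keep getting API A; updated clients
  // opt in to API B. Once nobody sends A-style requests, delete handleFooV1.
  return req.newVersion ? handleFooV2(req) : handleFooV1(req);
}
```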
MMO games like Guild Wars handle this problem by deploying a new server version in parallel to the running old servers, keeping sessions in progress connected to the old servers. New sessions connect to the newly deployed servers, while old sessions eventually "dry out". Old server processes with no active connections shut themselves down.
IMHO that's an easier solution than having to implement a backward compatible API or other API versioning solutions.
And if you do a gradual rollout, newer clients can connect to older backends! And if clients can talk to each other, there can be THREE versions involved. Also: rollbacks.
I’m curious if anyone has a system for managing this that they love. I’ve pretty much only seen painful ways.
At <dayjob> we use flow types and graphql, so like OP our frontend will fail to compile if we make a breaking change. To assist with the backward-compatible-during-deploy issue, we additionally have a teensy bit of tooling that comments on our PRs to indicate dangerous API changes.
It's not perfect (I've ignored the comments before, thinking I knew better... I didn't), but it seems to help.
It wouldn't be difficult to make it more sophisticated, and then completely block PRs it knew weren't backward compatible, but we haven't seen a strong need to do that yet.
I think in an ideal world you would do API changes to a live system in much the same way as you do with a live Database Schema: Carefully, and in a backward compatible manner.
In https://game.bitplanets.com I have `gameVersion` that will change when I make breaking changes. Then the client kind of understands that the client version doesn't match the server version and it will refresh.
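A minimal sketch of that version check, assuming the server stamps each response with its `gameVersion`; the constant and function names here are illustrative, not the actual bitplanets code:

```typescript
// The version the running client code was built against (assumed).
const CLIENT_GAME_VERSION = 3;

// If the server reports a different gameVersion, the client should
// refresh so it loads the front end code matching the new server.
function shouldRefresh(serverGameVersion: number): boolean {
  return serverGameVersion !== CLIENT_GAME_VERSION;
}
```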
I second the sentiment though. I wouldn't choose to use plain JavaScript over TypeScript for any significant project.
You might ask why I wouldn't just use a more fully typed language like Java. My main response is that I love having the flexibility to choose how strict the types should be. Sometimes, typing something out fully is not worth the trouble. Having escape hatches lets me make that choice. While I enjoy using types to turn runtime bugs into compile time errors as much as possible, it's not the right thing to do 100% of the time.
> it's not the right thing to do 100% of the time.
What is an example of this in your experience? I am trying to think of one, but after 25 years of programming, I cannot say I have ever encountered a case where it was worth using ‘an escape hatch’ that did not bite us later on. Maybe it is the type of software/clients I work with, but ‘the trouble’ is not really something I have seen before, I think.
Forgoing types isn't like other escape hatches. Sometimes you'll get odd, hard-to-understand errors from the TypeScript compiler. On my team, we just use Babel, so we can write TypeScript (and get the benefits of types) without having 100% statically-correct code. Getting from 99% to 100% statically correct is time spent detracting from other, more valuable contributions to the business that don't have a meaningful impact on code quality. Diminishing returns.
The OP mentioned this in the context of ‘why not a fully statically typed language’, i.e. a language that, unlike TypeScript, does not return hard-to-understand errors; given such a language, what is an example that falls in that last 1%? I do not know of such an example and I am curious.
One example is that TS does not enjoy you going from string enum keys to an actually typed enum, but you can fake it by casting to ‘any’ first and then to the new type.
Sorry, but that was not the question; the question was about examples on a statically typed language like Java (or Kotlin, C# etc), which is what the OP mentioned.
Statically typed languages do not 'enjoy' going from strings to enums either, which is excellent news by the way, but you can convert them with Enum.TryParse() to handle them properly. I cannot really see how it is 'more work' to do that; the 'any' solution seems not much easier compared to C#'s TryParse or even the TS one:
const color: Color = Color[green as keyof typeof Color];
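For completeness, here is a hedged sketch of parsing a string into a TS string enum without going through `any`; `Color` and `parseColor` are illustrative names, not anything from the thread's codebases:

```typescript
enum Color {
  Red = "red",
  Green = "green",
}

// Validate against the enum's runtime values before casting, so the
// cast can never produce a value outside the enum.
function parseColor(s: string): Color | undefined {
  return (Object.values(Color) as string[]).includes(s)
    ? (s as Color)
    : undefined;
}
```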
Both C# and Kotlin have a 'dynamic' type for just that escape hatch. IMHO, that's a better way than TS, since types might not be the right thing 100% of the time, but they are 95% of the time.
I programmed professionally in C# for about eight years and in that time I can count on one hand the number of times I saw the ‘dynamic’ keyword. Nobody seems to think it’s worth the effort or confusion.
In a past life I did some real wizardry with dynamic in C#, drastically improved performance on a thing that had been using strange combinations of T4 compile-time templates, Reflection.Emit, and some very kooky multi-database lookups with a custom IDynamicMetaObject provider.
It was definitely worth the effort then. I also recognize why it is a very unlikely scenario to repeat.
The other place where I used dynamic most heavily in C# was that it was really great for IronPython interop where you could have a bit of "data science" done in a Python module, build that to a .NET assembly and call into the Python module from C#. (Though the other direction was more common, using IronPython to do data science around a bulk of C# models and code.) With so much more data science since that past life of mine having moved to Python, it's almost a shame that IronPython hasn't really kept up.
Agreed. It’s also not as easily used as “any” in TypeScript. I would probably reject a C# codebase that uses dynamic widely. It’s only good for very local use like some serialization or deserialization.
That's my experience as well, I just wanted to note that the escape hatch exists.
Maybe I should have amended my comment to 99% of the time? Or perhaps noted that there are other functionalities (e.g. tuple types, reflection) to make working with types almost as easy as without? I use reflection very frequently...
However, in JavaScript-world, where the external types you interact with - from other libraries, from APIs, from the browser itself - are poorly defined, having that escape hatch becomes a sanity-saver, does it not? That's where the power comes from: the squishy, fuzzy, less structured edges.
The parent-parent comment was "I want an escape hatch in the language" and then the argument for it is "someone else used an escape hatch, and now I need an escape hatch". :D
Well, yes, most of the packages available in the JS ecosystem were written in JS. The easiest way to retroactively write typings is to use `any` all over the place.
I saw another commenter say that the benefit of using TS in the frontend and backend is that DB updates in the backend also update the frontend code: "The beauty of Gary's setup is changes to his database update the types for the server side code AND client side code. The value of that comes from using the same language in both platforms."
Can you get this sort of benefit with Kotlin? How?
I would love to do frontend Kotlin, but is it one of those things where you end up painting yourself into a corner and wishing you had used TypeScript all along?
I have only played a little with the Kotlin web stuff, but no I would not recommend doing anything serious with it.
I think JS/TS are king because of the ecosystem and wealth of documentation. My experience with more "niche" front-end languages is that you have to do much more yourself or have lots of messy interactions with plain JS libraries.
But even if Kotlin JS had an enormous ecosystem, I'm not sure I would love it. What I love about Kotlin is that it is a great replacement for Java. Interoperating with Java has zero performance overhead and zero mental overhead. Because of this, and Kotlin's general closeness to Java, many more places writing JVM code are comfortable using Kotlin compared with other JVM languages like Scala or Clojure. Kotlin cleans up the Java syntax and adds nice features like type inference, non-null by default, coroutines, blocks, extension functions and so on.
But if we're not on the JVM, I'm definitely not sold on Kotlin as the perfect language or anything. Sealed classes are nice, but still not as nice as tagged unions. As far as type systems go, I think the one in TS is more flexible and expressive.
Like Kotlin, what I like about TS is the zero mental overhead integration with JS and the JS ecosystem. But if we're not tied to a browser, I guess I wouldn't ever really want to program in TS.
Node is kind of remarkable in that it is the backend platform that is most similar to a browser, which makes these full-stack setups a little more natural. Not sure I'm sold on their value in general since most teams are going to be big enough that front-end and backend code are written separately. If you want to keep types in sync, I think graphql or protocol buffers are better solutions.
It's early days. 1.3.70 was only released a few weeks ago and featured a major overhaul of the kotlin-js related tooling.
I gave it a quick try a few weeks ago and it's actually nice. Before that, it wasn't really usable (dead code elimination was a problem, webpack integration kind of sucked, etc.). With the new tools, it's stupidly easy. It uses webpack and karma underneath, but it's all driven from gradle and it works without any webpack- or karma-specific configuration (you can customize, of course). You can depend on both npm packages and kotlin-js jars. There's a gradle plugin under development that translates TypeScript declarations for npm packages to Kotlin. So the development experience is pretty nice. But it will take a while for this to mature and get more users. IMHO the upcoming 1.4 release might kick off some adoption.
IMHO what it needs is proper Kotlin frameworks for the frontend. Yes, you can use Vue and React, etc. But they are kind of optimized for people used to untyped blobs of JSON. Slapping the word state on this does not quite get you to a level that would be comfortable to a Kotlin developer who also develops for Android. IMHO, it's just a matter of time before people discover 1) you can now do frontend work with Kotlin and it doesn't suck, 2) we should just scrap the few hundred KB of code that actually gets imported at runtime (i.e. React and related cruft) and build that up from the ground, and 3) there's a ton of stuff that was already done for Android that ports over relatively well to the browser world. That's actually a lot less work than it sounds like because the runtimes for frameworks like React and Vue are tiny.
Reimplementing react seems to be a rite of passage these days for a lot of people and doing that in Kotlin doesn't sound like it should be that big of a deal. You might be tempted to just use the DOM directly and people do that as well. State management when you have proper data classes, co-routines, channels, etc. becomes a whole different game than throwing balls of untyped json around via global variables.
IMHO this is actually true for TypeScript as well. The next logical step for TypeScript would be to cut loose from its JS legacy and start having its own frameworks.
That is usually the outcome of not using platform languages: semantic mismatch with the platform, not being able to use the libraries without some kind of FFI or wrapping code, needing extra tooling for debugging.
TypeScript gets away with it, because it follows the C++ Trojan route, without the type annotations it is just JavaScript.
I might be very critical of how WebAssembly gets sold, but one thing is certain: unless browser vendors decide to rip it out, with WebGL/WebGPU + WebAssembly, a few of us will be getting our beloved Flash, Silverlight, and Java back, even if in a different kind of clothing.
Not a pro C# dev but I use it quite a bit - IIRC the correct keyword for that situation would be 'var', which is similar to 'auto' in C++. It just statically infers the type from the assignment, while 'dynamic' actually does runtime binding.
I use 'var' all the time, I use 'dynamic' only when working with JSON objects and such.
I would avoid using dynamic (I believe underneath it is basically the same as a Dictionary<T, U> anyway, however I don't really care to look it up).
Also, regarding var: IMO it is best to use var only when the type is evident, i.e. if the right-hand side makes the type obvious, use var; if it doesn't, use the type name.
e.g.
When you should use var (it is evident from the right hand side)
var name = "Joe Bloggs";
vs when it isn't evident:
var myVariables = _service.SomeMethod();
It allows the code to be read more easily without the IDE. But obviously it is up to you.
> Use implicit typing for local variables when the type of the variable is obvious from the right side of the assignment, or when the precise type is not important.
> Do not use var when the type is not apparent from the right side of the assignment.
> Do not rely on the variable name to specify the type of the variable. It might not be correct.
Your example has me inferring the type from the method name, which isn't dissimilar to what they are advising against here.
TBH I should have known I would get a comment like this, where everyone lives in utopia, everything is always obvious to another person, and nobody ever had a bad night's sleep or isn't feeling their best, etc.
There are plenty of times where it may not be obvious what type it is returning, even with quite decent coding standards, and in any event I think it helps readability and isn't a huge ask. You can get ReSharper / Rider to do this as a project setting (Visual Studio can probably do this out of the box now) in your repo and then the IDE will just do it for you.
MS recommends suffixing `Async`, and VS by default names fields with an underscore, but the C# naming guidelines suggest PascalCase. So referencing MS docs is hardly justification.
So? Because you disagree with some parts of it, all of it is a bad idea? That is a poor argument.
I've justified quite clearly why I think it is a good idea, and in my original post I said quite clearly "it is up to you". It is trivial to turn on in the IDE and it improves readability outside of the IDE.
That is a cool project, but you'd still have to keep it in sync with your frontend code. The beauty of Gary's setup is that changes to his database update the types for the server side code AND the client side code. The value of that comes from using the same language on both platforms.
You aren't bound to use the same language on both platforms to get this functionality. I had this running 3-4 years ago using C#, EntityFramework, and a d.ts generator that I forgot the name of.
A change to the DB schema just required me to refresh our edmx from the DB schema and then build.
That's not true. There are plenty of cases where certain pieces of software are cordoned off and/or non-critical and would not affect the rest of your stack if they fail.
Lifeguards are expensive. Basic typing is damn near free. More languages should adopt dependent typing, but for the moment there are languages that lack it and offer enough advantages to make up for that. I don't think you can say the same thing for languages without a true type system (one in which unsoundness is the exception rather than the rule) - there are too many good alternatives for it to be worth compromising on that.
It makes it really easy to add type safety to parts of a legacy project as you go along; for new projects you can enforce type safety with a linter or stricter compiler settings.
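For instance, a `tsconfig.json` along these lines (a sketch using standard compiler options) lets untyped legacy files coexist while new code is checked strictly:

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": false,
    "strict": true,
    "noImplicitAny": true
  }
}
```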
A more accurate remix of this analogy (for the TypeScript case) is that it is like building a wall around the part of the pool that you actually swim in, and allowing that folks can pee all they want as long as it is outside of the boundary you built.
Does Typescript actually add runtime checks, even for e.g. container types? I'd be very surprised if it did, because that would mean a lot of overhead.
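It doesn't: TS types are fully erased at compile time, so nothing checks shapes at runtime. A small demonstration, using a hypothetical `User` type:

```typescript
interface User {
  name: string;
}

// The cast is a compile-time assertion only; no checking code is emitted.
const u = JSON.parse('{"name": 42}') as User;

// At runtime, u.name is a number despite the declared string type.
const actualType = typeof u.name;
```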
No - proper unit testing mocks dependencies, so untested dependencies shouldn't affect tests. However, if there are typing issues in your dependencies, even code that is strongly typed can crash as a result if it uses those dependencies.
If you control both ends of an API, there's no reason not to use Thrift or equivalent (gRPC) rather than untyped JS. Even if you don't believe in typing in general, an API boundary is exactly where it's most important to make sure both sides agree on what the interface is.
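A hedged sketch of what such a contract might look like in protobuf (all names illustrative); both sides generate matching types from this one file, so they cannot silently disagree:

```proto
syntax = "proto3";

// Single source of truth for the API shape; codegen produces
// matching client and server types from it.
message FooRequest {
  string id = 1;
}

message FooResponse {
  repeated string names = 1; // explicitly an array, not a single string
}

service FooService {
  rpc GetFoo(FooRequest) returns (FooResponse);
}
```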
Do I read the chart correctly that their whole codebase (minus dependencies) is less than 15k lines of code? Porting 6.5k lines of Ruby in 2 weeks sounds reasonable, but migrations 10x or 100x that size are far more challenging.
TypeScript has been on HN for different reasons over the last few days. The main thing that has stood out, from the flame wars erupting over it, is that it is a truly divisive idea. To some, it gives the same securities and checks provided when working with a statically typed language. To others, it curtails the power and expressiveness inherent in JS, the dynamic, functional language it compiles down to.
TypeScript does provide a lot of benefits, but for it to truly succeed and take over in the JS world would mean it has to truly also reflect, and enable JS's functional and dynamic roots. JS got where it is by being JS.. an extremely flexible and accommodating language. HTML got this far today by doing the same, just google XHTML if you doubt that. So for TypeScript to succeed where CoffeeScript failed, this may be the direction it needs to lean more towards: being less divisive, and inviting all kinds of programming paradigms to the party.
Is that really all it is though?
Cases in point: Java Applets, VB, Flash, Dart ...
All of these were at one point or another "built into the browser", but where are they now? Give credit where it is due, the success of the web as a platform lies not in its technical superiority over alternatives, but in its inclusiveness and flexibility. Any tech that is trying to replace HTML, CSS and JS in this regards will seriously have to consider and accommodate this, or suffer the same fate of hundreds of other pretenders to the throne. Long live the king! Long live open, approachable and flexible tech!
Exhibit A: List of very different and diverse languages that compile to JS ( https://github.com/jashkenas/coffeescript/wiki/List-of-langu... ). If this does not demonstrate the flexibility and malleability of the language, then I do not know what does. Take web assembly for example, which has been around for over five years now, and was specifically designed as a "compile to" language. How many languages compile to web assembly in comparison?
Any tech that is as divisive as TS is simply not going to get far. Flash ActionScript was massive 10 years ago compared to any alternative-to-JS tech today, and where is it now? The creator of TS even cites ActionScript as one of the main inspirations for TS. ActionScript even had a more powerful version of React's JSX (E4X), where is it today? CoffeeScript was all the rage 5 years ago, JS simply absorbed all its good ideas, where is it today? The things that last, that stand the test of time, are the things that are flexible and accommodate different ways of doing things. For your beloved TS to stand the test of time, it has got to accommodate the whole JS ecosystem, not just those who favor the static, object-oriented way of doing things.
I'm not sure why you're talking about "object oriented". TS, in and of itself, has nothing to do with OOP. You can write functional-oriented code in TS same as you would in JS. TS doesn't mandate use of classes or anything like that.
As an example, I can write a React+Redux app in TS, and be writing 100% plain functions (components + reducers) the entire way through.
From my viewpoint, TS has more than hit enough critical mass to survive for the long term:
- Microsoft is heavily invested in its ongoing development
- The Angular community requires use of TS
- The React community has split in general between types and no types, but a recent survey of /r/reactjs readers indicated ~50% of React devs are using TS [0]
- Where CoffeeScript introduced new syntax entirely, TS's focus on being a superset of standardized JS means that there's both less to worry about compat-wise _and_ it can be seen as a way to use new language features instead of Babel
FWIW, I wrote up my thoughts on learning and using TS as both an app dev and a Redux maintainer [1], and I'm sold on using it going forward.
OOP is not just about using classes, just like functional programming is not just about using functions.
They're different programming paradigms, each with their own patterns, strengths and weaknesses. Idiomatic TS favors an OOP style, which is why there are even libraries out there [0] whose sole goal is to enable using TS the functional way.
> The Angular community requires use of TS
Angular was once the king of the JS frontend, but now it has more or less been reduced to a certain niche in the market. It is no accident that it is very popular among those with a Java background, with Java being a very good example of a static OOP language.
> The React community has split in general between types and no types
That split is being caused by TS. This is not a good thing for the eco-system as a whole. The idea that everyone will be forced into adopting TS because of the power of MS is the very idea that will lead to a backlash, just like the backlashes against XHTML and Java applets.
> Is that really all it is though? Cases in point: Java Applets, VB, Flash, Dart ...
> All of these were at one point or another "built into the browser", but where are they now?
VB was never built into all browsers. As far as Flash and Java applets, they were popular until they stopped being supported in all browsers - namely the iPhone and iPad.
> Exhibit A: List of very different and diverse languages that compile to JS ( https://github.com/jashkenas/coffeescript/wiki/List-of-langu.... ). If this does not demonstrate the flexibility and malleability of the language, then I do not know what does.
Almost any language can be transpiled into any other language. That says nothing about any trait inherent in the language.
> Flash ActionScript was massive 10 years ago compared to any alternative to JS tech today, and where is it now?
Apple singlehandedly killed Flash by not supporting it on iOS.
> CoffeeScript was all the rage 5 years ago, JS simply absorbed all its good ideas, where is it today?
CoffeeScript was never championed by a company that has the revenue and clout that MS has and never got the adoption. Please don’t mention all of the initiatives that Google tried. Google isn’t exactly known for its ability to manage a platform.
It is actually quite frustrating compared to some other languages; more often than not you see that it's just `JSON.parse` and hope for the best. I wish we had something like Rust's Serde or Haskell's aeson.
If you want to validate data coming over IO then your options are data validation libraries such as joi, io-ts, and yup. You have to write separate data validators on top of your types. io-ts has a way of deriving types from validators, but io-ts is often seen as quite intimidating, being built on top of fp-ts.
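The derive-the-type-from-the-validator idea can be sketched without any dependencies. This is not the io-ts API, just the shape of the trick; names like `objectOf` are made up:

```typescript
type Validator<T> = (u: unknown) => u is T;

const isString: Validator<string> = (u): u is string =>
  typeof u === "string";

// Build an object validator from per-property validators, and derive
// the static type from the same definition so they can't drift apart.
function objectOf<T extends Record<string, Validator<unknown>>>(props: T) {
  type Out = {
    [K in keyof T]: T[K] extends Validator<infer U> ? U : never;
  };
  return (u: unknown): u is Out =>
    typeof u === "object" &&
    u !== null &&
    Object.entries(props).every(([key, check]) => check((u as any)[key]));
}

const isUser = objectOf({ name: isString });
// The static type { name: string } falls out of isUser via inference.
```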
No matter how carefully you maintain strict typing within your TypeScript project, the moment you hit IO everything is basically `any`.
Some projects like openapi-generator might generate some validators for server responses, but I've not seen any good generators that do actual validation.
I'm not sure if apollo-graphql does response validation? Does anyone know?
I also liked runtypes more, but it turns out composing io-ts codecs is perfectly readable, and being able to deserialize at the same time/instead of validating is very convenient.
Apollo does indeed validate responses as well as requests. If your server tries to send an object that doesn’t match the schema, you’re going to return just a 500 error, which means your clients can trust the graphql types implicitly.
Since in my current job I can't use graphql but have to type everything in openapi, I had to resort to writing my own lib to get the same functionality for a REST backend in TS - https://github.com/ovotech/laminar
My personal take is that io-ts _is_ typescript’s aeson, but if you’re scared of the baggage it comes with then how about giving zod a try? https://github.com/vriad/zod
For my current project, it is handled by GraphQL. Using a combination of type-graphql and code generation, I have fully round trip static types on both client and server, by only defining my model once. Even my graphql queries are checked.
It is a bit of a pain to setup, but once it is done, I've found that it works pretty well.
I use a combination of class-transformer[1] and then class-validator[2].
First transform the JSON into proper class objects and then validate the object for correctness.
IMHO this has nothing to do with TypeScript, because at runtime all the type information is gone and you need to validate the remote data nevertheless. TypeScript can help you structure your validations, but you need to write them.
I use tcomb-validation[1] to validate incoming data. The downside is that it cannot use TypeScript types so you have to define the schema twice.
For me, the duplicated types turned out to be less of a hassle than initially anticipated. In many cases, I ended up with different data structures anyway because the data is transformed in the client. I think of them as transport types and client types.
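A tiny sketch of that transport-type / client-type split (all names illustrative): keep the wire shape as the API sends it, and map it into the shape the client actually wants.

```typescript
// The shape the API actually sends (transport type).
interface UserTransport {
  first_name: string;
  last_name: string;
}

// The shape the client code wants to work with (client type).
interface UserClient {
  fullName: string;
}

// The transformation boundary between the two.
function toClient(t: UserTransport): UserClient {
  return { fullName: `${t.first_name} ${t.last_name}` };
}
```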
I watched them go through this process via twitter, and it sounded like inventing that shared API model was a type puzzle I wouldn't be able to figure out myself. Also, they're the only people I know of doing it?
Did it not occur to them to, I don't know, "test" the API when they make changes? A compiler or stricter type system may help prevent certain careless errors, but not all (or even most) of them, while a proper test scheme will catch all such errors.
My snarky tone was unwarranted but I'm not actually assuming cluelessness here, nor did I come there quickly. I find in practice that TDD and what I think of as "testing" are quite orthogonal.
> With the Ruby backend, we sometimes forgot that a particular API property held an array of strings, not a single string. ... These are normal dynamic language problems in any system whose tests don't have 100% test coverage.
TDD in general focuses on code-adjacent test strategies like unit tests. In the Ruby TDD world it's a popular strategy to test first, code second at the level of classes or even individual methods. In practice this "tests = code = tests" philosophy produces both more code and a focus on metrics like coverage that only measure "for how much of my code do I have other code that asserts that my code is doing what the other code says it should be doing" rather than ensuring "my code is actually doing what someone else needs it to do".
"Testing" as I intend the term means using the software to do whatever it is supposed to. For a server side API that probably means consume it via a client. Any client that relies on the type of a property being an array instead of a string will blow up (except perhaps Python, grr). Any reasonably complete smoke or integration testing regime should expose this problem, but more immediately I think developers should be actively testing the thing they are changing while they are changing it. Personally I dislike compilers and restrictive type systems in large part because they _inhibit_ this sort of rapid, iterative testing and fixing. Partially functional dynamically typed code is far more useful to me as I work through a series of related changes than statically typed code that requires I fix all the issues it perceives as important before I can keep going on what really matters.
> Any reasonably complete smoke or integration testing regime should expose this problem
But that's not without cost. At least in the Ruby world (and Bernhardt holds/held this view) integrated tests are to be avoided where possible because it creates a negative feedback loop of slow tests and an exponential number of tests that need to be written to achieve equivalent coverage.
> but more immediately I think developers should be actively testing the thing they are changing while they are changing it
This smells like the age-old discipline/rigour/professionalism platitude often trotted out by proponents of Software Craftsmanship. I'd rather embrace the fact that humans make mistakes and optimise for that, rather than hold people to unreasonable standards.
Furthermore, while I agree that people should be testing the thing they are changing while they are changing it, you appear to be implying that only one form of testing is acceptable here. Why not test the type signature? Why not a formal proof of correctness? Why not a property-based test? There are many ways to improve the chances of a piece of software to work. Restricting ourselves to only one of those ways is, quite frankly, dumb.
> Personally I dislike compilers and restrictive type systems in large part because they _inhibit_ this sort of rapid, iterative testing and fixing.
That's a fine opinion to have, but that's all it is. I've worked professionally with dynamic languages, and I hold the opposite view. I work with a few projects totalling about 60,000 lines of Haskell, and I feel the language enables rapid, iterative testing much more than Ruby ever did for me. Of course I can't prove this empirically, which is why my opinion will only ever be as good as yours, and vice versa.
> Partially functional dynamically typed code is far more useful to me as I work through a series of related changes than statically typed code that requires I fix all the issues it perceives as important before I can keep going on what really matters.
I'm sorry, but this is plainly incorrect. The ability to defer type errors to runtime certainly exists in Haskell, and I expect not exclusively.
If you can come up with a formal proof of correctness for an evolving API then you are clearly operating on a higher plane of software development than us mere craftspeople, and I certainly wouldn't presume to talk you out of trying.
Porting to a typed language helped prevent type errors in a typeless language. Who woulda thunk?
I am currently also working on a project with a JS backend that's slowly being ported to TS, for the same reasons, but I would still prefer to go back to C# and take it a level further. I just don't enjoy the TypeScript language all that much.
> Porting to a typed language helped prevent type errors in a typeless language. Who woulda thunk?
You say this sarcastically, but every JS-related thread on HN turns into a flamewar between type lovers and haters.
The type haters make exactly the argument you're mocking: that typeless languages do not result in more type errors. Their reasoning is along the lines of, "I don't need a type system because I'm a professional and don't make mistakes."
That sounds laughable or like an exaggeration, but it's a surprisingly common line of thinking. The buggiest code I ever worked on was a PHP code base written by someone who had been coding for 20 years and had a Master's in CS. Before I inherited the code, he told me, "I code in the terminal. I don't need IDE features because I don't really make mistakes in PHP anymore."
That of course is not the argument at all. The argument is that the cost of working with static types exceeds the benefits. Finding type errors is obviously a benefit, while not finding them (or finding them only at runtime) is a cost. The question is what other costs and benefits there are, and how to compare them. Nobody agrees on that, nor ever will.
This argument will never be resolved because it's dominated by psychological factors. Once you've adapted to environment A, A's costs (e.g. hoops the compiler makes you jump through / time spent tracking down type errors) become habitual and you don't notice them anymore. Meanwhile the costs of unfamiliar environment B are extremely highlighted in your attention. Similarly, if you like and identify with A, its benefits will be top-of-mind for you, while if you don't like B (and nobody likes B), you'll discount its benefits.
It gets worse. Sometimes benefits are costs until you make it through a learning curve. An example is parentheses in Lisp. They stick out like a sore thumb until one day they don't. Later you realize that they were a liberating force all along, and you now have anti-gravity powers you never dreamt of. (<-- This is an example of how people talk when they are adapted to an environment whose costs they no longer count and whose benefits are top-of-mind for them.) We can't even agree on what the costs and benefits are, let alone how to measure them.
It's no wonder that people feel so strongly about this dispute. When you don't count the costs of your preferred environment, and only count the benefits, it gets obvious pretty quickly that other people are idiots. For some reason the idiots feel the opposite and perversely insist on it.
It gets worse. Imagine that you're a decent, fairminded sort and so, magnanimously as is your wont, you decide to give the idiots a fair shake—you know, just to verify that you're being fair and whatnot. You fire up whatever it is (Clojure? OCaml? needless to say, you make a charitable choice) and start programming for a while. What happens? All the costs of unfamiliarity hit you in the face. Not one of the benefits that has lived at the top of your mind for years is anywhere to be seen. This thing doesn't even catch trivial type errors at compile time! / I can't even execute this bit of code in a REPL! This is a painful experience. It can even be an enraging one, for example if you are forced for some extraneous reason to work on a system you didn't create using tools you don't like. Most people who go through that experience once have their views solidified for life. Meanwhile someone else had the opposite experience.
Given how passionately people feel about this and the certainty in forum threads about it, it's curious (or maybe to be expected) that the question of published evidence comes up so rarely. HN user luu did the definitive survey on this: https://danluu.com/empirical-pl/. The short version is that such studies as there are don't address the core question and/or find insignificant effects and/or their designs tip obviously to one side or the other (and I do mean obviously, like comparing Peter Norvig's code to that of random undergrads).
Bias dominates every other factor, put together, times at least 10. As long as that's true, we can't say anything rational without genuinely accounting for bias. Do we actually know how to do that? The studies don't seem able to. These debates certainly don't, nor do they try. Maybe the interesting phenomenon here is actually the bias itself. Maybe it's not that we like an environment because we're more productive in it, but that we're more productive because we like it. Maybe we should get better at liking things.
The debate about this tends to remain stuck at a low level because once you've been around the block enough times you realize that it hurts to bang your head against a wall so you stop. Thus the vocal population undergoes a perpetual exodus of the experienced, but makes up for it with a fresh supply of enthusiasts, reminding one of the adage, "I arrive at the office late, but make up for it by leaving early."
Great comment. I am part of the minority that is dreading being forced to use Typescript some day, at least until they come up with a solution to annotate partial function application and the like.
TS solves problems I don’t have, and promotes a style I don’t want. I don’t want thousands of global symbols in hundreds of files in my project. At least TS has the decency to infer types in some cases, or at least put the type clutter after an identifier instead of before it, but it’s usually only superfluous clutter to me personally.
I realize that many people like TS. I just hope they understand why some of us are less enthusiastic.
I think you mean currying, right? It's been supported at close to the same level as Haskell for 3+ years[1].
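For what it's worth, a minimal sketch of what that looks like in TypeScript today, with no library at all (`curry2` is a made-up helper name, not part of any standard API):

```typescript
// A hand-rolled curried add; TypeScript infers the type of each
// partial application without extra annotations.
const add = (a: number) => (b: number) => a + b;

const addFive = add(5);   // inferred as (b: number) => number
console.log(addFive(3));  // 8

// A generic two-argument curry helper (illustrative, hand-written):
function curry2<A, B, R>(fn: (a: A, b: B) => R): (a: A) => (b: B) => R {
  return (a) => (b) => fn(a, b);
}

const concat = curry2((x: string, y: string) => x + y);
const greet = concat("Hello, ");  // inferred as (b: string) => string
console.log(greet("world"));      // "Hello, world"
```

The inference through each partial application is the part that has improved in recent compiler versions.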
> TS solves problems I don’t have
To me, it solves: 1) devs getting lazy on a team, 2) catching my mistakes before runtime, 3) correctly using library APIs, and 4) refactoring.
All of these might be low-value to you, though. I think #4 is the biggest time-saver because it enables automated refactoring (through the IDE) that isn't possible with plain JS.
> I don’t want thousands of global symbols in hundreds of files in my project.
I'm curious what you mean by this. Are you saying that TypeScript creates global symbols that weren't there before?
You can configure TypeScript so that it doesn't have any symbols from outside your project at all. For example, you can set it to import the types found in Node or on the web, or you can just skip them.
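A minimal tsconfig sketch of that idea; `lib` and `types` are real compiler options (tsconfig.json permits comments), though the exact values shown are just one possible choice:

```json
{
  "compilerOptions": {
    // Only pull in the ES2019 standard library declarations; leaving
    // "dom" out keeps browser globals (window, document, …) out of scope.
    "lib": ["es2019"],
    // An empty "types" array stops TypeScript from automatically
    // including every @types package found in node_modules.
    "types": []
  }
}
```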
Yes, implicit currying where supplying the final arg actually calls the underlying function is one kind of partial function application. Ramda is one thing I use.
I also use an in-house library to explicitly apply arguments to return a new function, which allows you to produce 0-arity or variadic functions.
I’m glad somebody has a work around for using Ramda in TS, but it looks every bit as “fun” as generics in Java, which I think have wasted more of my time than they have ever saved.
It is essentially correct that the discussion on typing occurs at the least nuanced level of discourse. I've been at fault doing this myself. The distinction between different sorts of type systems, and the relative conservativeness of the different analyses required to produce well-typed programs, is rarely brought up. I have a general saying that you shouldn't listen to people unless they can explain a point of view in terms of its tradeoffs. Very few real problems have absolutist solutions. I'm currently working my way through Types and Programming Languages and hope to one day present a more technical and thought-out explanation of the tradeoffs among untyped systems and the various levels of typed systems.
Like you, I think the truth is between the extremes. My original post was just pointing out that the extreme of type-hate is not uncommon here. The opposite appears to be true as well.
I'm not sure I do think the truth is between the extremes, but if I came across that way I did a good enough job of it for the purposes of writing that comment!
It's not really helpful to use a label like "type-haters" or "type-hate". I mean, it's helpful in the sense that it shows how tribalistic the arguments are (just replace "$X-haters" with enough Xs and you'll get the idea). But other ways of talking are more helpful.
On a related note: don't you think it's interesting how personal it feels? To some extent that's because the people who don't care that much select themselves out of the conversation to begin with. But I think it's fascinating even after pricing that in.
> It's not really helpful to use a label like "type-haters" or "type-hate"
> don't you think it's interesting how personal it feels?
I take your point and will avoid it in the future. I used it precisely because, as you say, people are very emotional about it and take it personally. I meant to imply that these were not necessarily good-faith arguers having a rational discussion, but rather participants in a flamewar. This general thread seems to be a mix of both reason and flame.
As for why it's personal, I don't know exactly and do find it interesting. Part of it is likely that people have invested a lot of time into one stack or another, and that becomes an emotional issue. If I spent 10 years becoming a PHP expert, I'll be more upset if someone says PHP is garbage. It suggests my skills or prospects are poor.
After that, I think people are saying things like, "People who use TypeScript are stupid/lazy/etc.", or "People who don't use TypeScript are stupid/ignorant/etc.". Then the emotion may come from the conflict itself, not from the actual position someone is holding.
Now I'm sure that many feel like that, but I don't think it's the majority; then again, I don't really know.
Here are some more arguments against types (don't necessarily agree with all of them myself, but for reference in the future when you want to write from the perspective of someone who feels more productive without types):
- There are other, more flexible ways to solve the same problems you solve with typing, the clojure.spec way is one of them
- You lose flexibility when suddenly everything is locked to everything else by name. Even if Account and Person both have first and last name fields, you can't use both of them if the function expects an Account and only uses the first and last name. Then you need to add an interface, name it something, and make Account and Person both implement it. Now the function is locked to the interface, and so on
- Type checking is a very basic form of testing that doesn't solve the most common, annoying, and hard-to-track-down bugs: logic bugs.
- Metaprogramming becomes harder. Not impossible, just harder.
- Types (at least in TypeScript and other languages I've been using in the past) disappear in runtime, so you can't really use them. It's just a tool for you to tell the compiler what something is so it can say "no, you did wrong here"
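That last point, type erasure, looks like this in practice: because the type is gone at runtime, you end up writing a guard by hand (a small sketch; `User` and `isUser` are illustrative names):

```typescript
// Types are erased when TypeScript compiles to JS, so there is no
// User value to check against at runtime; a hand-written type guard
// has to re-derive the information the compiler threw away.
type User = { name: string };

function isUser(x: unknown): x is User {
  return (
    typeof x === "object" &&
    x !== null &&
    typeof (x as { name?: unknown }).name === "string"
  );
}

console.log(isUser({ name: "Ada" })); // true
console.log(isUser(42));              // false
```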
Now I'm neither a type lover nor a type hater. I just want to use the right solution for the right problem. Sometimes that's a program that will take longer to write, but the specification is set in stone and the requirements won't change, so types will help you a lot. Other times it's a very dynamic and experimental environment where you have to be able to apply changes as quickly as possible, and minor errors don't matter as much as long as you can move forward fast and test theories. Then you can go back and refactor things.
> You lose flexibility when suddenly everything is locked to each other by name.
This is only true of nominally typed languages (such as... most typed languages). TypeScript is a notable exception: values just have to have the same property names and types and they'll fit. The language's philosophy is basically to describe the way people write JS rather than prescribe how people should write it, so idiomatic JS is usually easy to express in TypeScript.
There are some caveats to the structural type system in TypeScript. The most glaring is that multiple versions of the same typings will not align with each other, even when they share the same shape.
That's good, compared to the nominative type systems, on that point at least!
I'm just trying to help the person I'm replying to see more arguments in favor of dynamic systems without types, not necessarily aimed directly at TypeScript, since they mainly mentioned types in general.
> You lose flexibility when suddenly everything is locked to everything else by name. Even if Account and Person both have first and last name fields, you can't use both of them if the function expects an Account and only uses the first and last name. Then you need to add an interface, name it something, and make Account and Person both implement it. Now the function is locked to the interface, and so on
I see this argument frequently, and it doesn't make sense to me. Take the following function below in JavaScript and Typescript:
function fullNameUntyped(obj) {
  return `${obj.firstName} ${obj.lastName}`;
}

type FirstAndLastName = { firstName: string, lastName: string };

function fullNameTyped(obj: FirstAndLastName): string {
  return `${obj.firstName} ${obj.lastName}`;
}
In both cases, the function's argument is bound to the interface represented by FirstAndLastName. The difference is that TypeScript allows you to be explicit about it and can statically determine if your code conforms to this. Without static type checking, you're implicitly bound to that interface.
Also, you seem to have a misunderstanding: TypeScript is structurally typed (essentially duck typing checked at compile time). This means that interface compliance is based on structure, not name. So the following code is correct and type-safe:
class Person {
  personId: number;
  firstName: string;
  lastName: string;

  constructor(firstName: string, lastName: string) {
    this.personId = 0;
    this.firstName = firstName;
    this.lastName = lastName;
  }
}

class Account {
  accountId: number;
  firstName: string;
  lastName: string;

  constructor(firstName: string, lastName: string) {
    this.accountId = 0;
    this.firstName = firstName;
    this.lastName = lastName;
  }
}

interface FirstAndLastName {
  firstName: string;
  lastName: string;
}

function fullName(obj: FirstAndLastName): string {
  return `${obj.firstName} ${obj.lastName}`;
}

let person = new Person("John", "Doe");
let account = new Account("Jane", "Doe");

console.log(fullName(person));  // Prints "John Doe"
console.log(fullName(account)); // Prints "Jane Doe"
Both the Person and Account classes satisfy the interface FirstAndLastName in structure, but they don't have to reference it at all.
> Type checking is a very basic form of testing that doesn't solve the most common, annoying and hard-to-track down bugs, logic bugs.
First of all, I'd challenge that claim. Types can help with logic bugs just fine, and my guess would be that type bugs come up more often anyway. That includes typos, using the wrong variable, incorrect function argument order, etc. But I'd like to see a reference for that kind of claim either way.
Second of all, no one says you can't still write tests even if you're using TypeScript.
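One small example of types catching what is arguably a logic bug: a discriminated union plus an exhaustiveness check turns a forgotten case into a compile error rather than a silent runtime fallthrough (all names here are illustrative):

```typescript
// A discriminated union of payment states (illustrative names).
type Payment =
  | { status: "pending" }
  | { status: "settled"; amount: number }
  | { status: "refunded"; amount: number };

function describe(p: Payment): string {
  switch (p.status) {
    case "pending":
      return "awaiting settlement";
    case "settled":
      return `settled for ${p.amount}`;
    case "refunded":
      return `refunded ${p.amount}`;
    default: {
      // If a new status is later added to Payment but not handled
      // above, this assignment stops compiling: the forgotten case
      // is caught before the code ever runs.
      const impossible: never = p;
      return impossible;
    }
  }
}

console.log(describe({ status: "settled", amount: 42 })); // "settled for 42"
```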
Types are some work up front, but like any tool, once you learn to use them they aren't really a hindrance to prototyping or experimenting; if anything they make it easier for me. They also make it easier to refactor, since static analysis tools can help with many refactoring tasks as well.
I've been using Python for 16+ years. For most of that time, my development was in Vim with no sort of autocompletion or even a linter. After a few years away from Python, I'm back heavy into it, currently using 3.7. I have totally embraced type hints and automatic linting in an IDE (mostly PyCharm, but sometimes VS Code). I was always pretty productive with Python, but type hints have made me even more productive. (It also helps that I'm currently doing green-field development.)
I like that they're optional and not enforced. However, I also like that my IDE visually indicates when I've violated that contract. It has boosted my effectiveness quite a bit. I've defaulted to type hints almost everywhere, especially in any library code I write (I also try to write useful docstrings). For me, it's especially important because I'm the most senior Python dev on the team; the other five are experienced devs, but just learning Python. Me taking the time to do the type hints and docstrings is immensely helpful to them, as they all come from statically typed languages.
That said, this is my opinion. That's what this "argument" comes down to: arguing over opinions and tastes. Each side has its own reasoning, and that doesn't make either side right or wrong. I'd just say go in whatever direction works best for one's self or team. There's no one-size-fits-all solution.
[parent predicts flamewar, starts one with:]
> Their reasoning is along the lines of, "I don't need a type system because I'm a professional and don't make mistakes."
Nah. Where did you get that? That's so horribly wrong, it's not even funny.
Types are one tool in the toolbox to describe a system. It is not the most powerful or expressive one. It just happens to be available at compile-time.
It catches trivial bugs, acts as documentation, helps the compiler with memory allocation.
It also is static. And that's where the weaknesses begin. A whole class of problems that benefit from domain definitions being available at runtime are clumsy to write in languages that force types on you. Not impossible, of course, but clumsy (e.g. place-oriented programming).
To your point:
Everyone makes mistakes. The way to catch them and to prevent them in the future is not by having types. It's by writing tests.
Tests > Types > no types at all
Did you miss the part where they said "... at runtime are clumsy to write in languages that force types on you ..."?
That may most likely put "types + tests" at the bottom of their food-chain. And speaking of TS, since it compiles down to JS in production, it can be considered a very basic type of testing, or an advanced type of linting.
> That may most likely put "types + tests" at the bottom of their food-chain.
That is totally illogical.
If types are better than nothing, and tests are better than nothing, then how could both tests and types together be worse than nothing?
> Did you miss the part where they said "... at runtime are clumsy to write in languages that force types on you.."
No, I didn't miss it. I just disagree with it. Interpreting values at runtime is called "parsing". The parsing step exists regardless of whether the program is written in a static language or a dynamic language.
> If types are better than nothing, and tests are better than nothing, then how could both tests and types together be worse than nothing?
Ok, move it to the left by one; my point still stands: considering their goals and experience, types + tests will most likely be near the bottom of their food chain.
> No, I didn't miss it. I just disagree with it. Interpreting values at runtime is called "parsing".
Actually, you completely misunderstood it. It has nothing to do with parsing. It's all about design patterns: functional and dynamic-language design patterns, to be more specific.
1. I disagree. Perhaps this[0] will help you understand the bits you are missing.
2. I do not understand why you are so hostile. To try and understand, I went through some of your comment history, and I can see it's not just me you are hostile towards. Furthermore, you appear to have a poor understanding of how JavaScript works. For the former reason, at least until your anger issues are addressed, I don't find it constructive to continue discussion with you.
I have not yet looked at your comment history, but just looking at your comments a few lines up, that's what I've found, and the first two are your comments to someone else. Ever heard of projection? That is what you are suffering from if you cannot see that you are just as hostile as I am.
Personally, I respond in kind, so I am very much aware of when I'm being hostile. And unlike you, I never resort to cheap personal insults about the understanding or abilities of the people I engage with. That's just beneath me, my friend.
> Before I inherited the code, he told me, "I code in the terminal. I don't need IDE features because I don't really make mistakes in PHP anymore."
Again, it seems satirical, but it's not uncommon.
Went to a tech meetup group hosted at a moderate-sized local tech company. The company had job openings, and I'd looked at a couple earlier that month. Some of their staff came to the meetup, and I got to talking with one of the 'lead' guys. I'd recently started using IntelliJ, and was talking to some others about how much more productive I was being, and that it was the best money I'd spent on anything in years.
The lead guy jumped in and said, more or less, that IDEs were for wimps. If you're a good developer, you don't need an IDE; he knew everything about their codebase, was very productive, and had been for years.
My experience was the opposite - I do a lot of freelance/consulting/dev stuff - my life is jumping between projects every few months or year or so. Having tooling that helps me understand so much more about a project than I could get just from vim (for example) was so eye-opening and transformative to my way of thinking that it made me re-evaluate a lot of previous notions, habits and assumptions about development. I'd explained that using good IDEs had made me better at my craft in every conceivable way.
He just sort of shrugged and said he could understand why some people might need an IDE, but they're mostly just crutches for people who don't want to spend the time to learn a codebase the 'real' way.
I'd been interested in applying to that company, but didn't bother after realizing I would be having to work with that attitude daily.
When I first got into Node.js I went to a Node meetup and mentioned I was having trouble getting typescript set up. They admonished me, saying you don’t need that, you just need test coverage. Ok...
> Funny, I usually see type haters claiming that they don't actually make the kinds of mistakes that are fixed by strong typing.
As your typical type hater, my preferred argument is: yes, static typing prevents a certain class of bugs from happening; no, this class of bugs is not nearly as relevant as type lovers seem to be claiming. By an order of magnitude if not more.
Source: 6 years of being a core developer of a front end framework consisting of 1,500,000 lines of ES5 JavaScript and SASS.
I'm sure that class of bugs is relevant for Typescript programmers, because they never develop the skills to not implement those bugs in the first place.
If your usual workflow is to defer all of the typing information to the IDE rather than keeping it in your head, then the second you can't access that information at a whim, you're fucked. It's like how I can't navigate around my city without Google Maps, because I've never had much of a reason to.
That metaphor works well, because Google Maps is probably a net positive. It's just that if I go around waving my phone in the face of a 60 year old cab driver that knows where everything is, going "I can't understand how you think you can get around without GPS, you're making loads of mistakes without realising it", I'm going to look like a fucking moron.
> If your usual workflow is to defer all of the typing information to the IDE rather than keeping it in your head, then the second you can't access that information at a whim, you're fucked.
This is just not at all how it works in practice. You still keep it all in your head, but the compiler errors protect you from small mistakes.
No matter how great of a programmer you are, you will make transcription errors with the things you have in your head. This is literally exactly the type of "I don't make mistakes" nonsense that I said was common, and a bunch of other people said was extreme and a strawman.
Your argument is the one that's a strawman. You're assuming that the kind of mistakes that I make while coding are the exact kind that Typescript prevents. Which is not the case.
There are more ways than type hints to fail fast. I'm not saying that I and all the other JS devs don't make mistakes. I'm saying the mistakes we make are ones we catch in 5 seconds, because the modern web dev environment is set up to facilitate that. TypeScript devs make additional mistakes on top of that because they have tools that catch those mistakes quickly too. We don't, so we get conditioned to program in a way that prioritises not making those mistakes (possibly at the expense of other mistakes becoming more common).
The point is that someone with years of JS experience is going to be more productive with JS, and someone with years of TS experience is going to be more productive with TS. I'm not saying TypeScript sucks, just that the cost side of its cost/benefit equation isn't the same for every dev, whereas your shit argument assumes that it is because you only base it on your own experience.
> You still keep it all in your head, but the compiler errors protect you from small mistakes.
It is interesting how "protecting from small mistakes" is touted as "solving our API woes", innit?
At some point it is liberating to admit that static vs dynamic is a matter of personal preference and nothing more. If I like it, I will find a thousand reasons to justify it, and vice versa.
I just wish everybody would be open about it: I like "type safety" and the warm fuzzy feeling it gives me, and nothing you unwashed heathens can say about tight coupling, increased incidental complexity, over-engineered APIs and productivity loss can sway my opinion! Why, my productivity is _increased_ with static typing, because I make up for hours of bikeshedding about which type better conveys the underlying intent with writing less null checks and unit tests! Take that, type haters!
> I'm sure that class of bugs is relevant for Typescript programmers, because they never develop the skills to not implement those bugs in the first place.
This rings true, especially given that in my experience, most of the static typed people either never worked with dynamic languages at all, or converted from dynamic to static. Most of the dynamic minded people I know actually converted _from_ static typing after doing that for years (myself included).
The calculus is pretty simple IMO: if one has mental discipline to work with a dynamic weakly typed language, not having to mess with types and compilers is downright liberating: ye godz, just gimme that data! On the other hand the people who never experienced conditions leading to developing said mental discipline tend to abhor the idea of not having the "type safety" (cute marketing, that). Hence the chasm between Lisp crowd and Haskell mob, with JavaScript being the perpetual battleground somewhere in between.
Ext JS is the framework in question. The last time I worked on it (Oct 2018), it was about that size shared between two major versions: Classic toolkit and Modern toolkit, including tests and examples. It could be more than that now but I haven't touched it since.
I used to do a lot of things on it, including _very_ deep refactorings with sweeping changes across the codebase. The secret sauce is focus on automated testing: when I left the company, we had ~70,000 test specs for the framework, and the test suite was executed on each commit to every PR pushed to Github. Depending on framework version (there were several in flight), and browser matrix (15+ supported browsers), each test run yielded 500,000-700,000 spec results (and finished in under 20 minutes).
There was a lot of custom tooling around this, of course. Including a test runner that I got open sourced right before leaving the company: https://github.com/sencha/jazzman. Sadly it looks like there were no new commits since I left; I hoped to pick it up later but so far I haven't encountered the need to run significantly sized test suites in massively parallel fashion. Nobody writes that many tests. :)
A framework is usually the foundational code for an app, for sure. In this case it was a bona fide JavaScript framework, Ext JS.
It is that big because it implements a lot of things that other frameworks don't even think about trying, things mostly used to build very boring line-of-business applications (think thousands of forms, huge grids, reports with charts, etc). Applications built with Ext JS can easily dwarf the framework itself; the biggest I've seen personally was ~30 MB of minified ES5 JavaScript.
Those would be the assembly-language-coding type haters. They can't make those mistakes, or they are borked.
I make the mistakes that are "fixed" by static typing quite regularly, but since I execute the code that I write, they don't survive.
Where static typing helps is the refactoring. You change some code; what other things have to change, where?
Although informing the process of refactoring, static typing, at the same time: (A) puts up some barriers against refactoring and (B) creates the need for a lot of the refactoring in the first place and (C) requires a more complete refactoring job than is minimally necessary to validate the idea behind the refactoring.
Refactored code doesn't escape the need for testing. If you have some statically typed code that isn't tested well, and think you can refactor it just by making a few changes and then adjusting elsewhere to shut up the compiler, you have another thing coming.
I refactor code with dynamic types all of the time with no problems -- but the code I work on has a lot of tests (or I'll add them), and I definitely never refactor code that lacks tests, whether it has dynamic or static types.
It's almost as if different people have different working styles that make different aspects of a language useful.
I think the more nuanced point folks tend to make is that they catch the errors at Run Time instead of Compile Time, and that the cost of strict typing is greater than that of catching it at Run Time.
Personally, I'm agnostic (I make mistakes in all phases and fix most of them eventually ;) )
I agree that people make that point, and I agree that it's a good one. But it's irrelevant to this sub-thread, which started with me attempting to convince smt88 that they were misrepresenting the opinion of the people they were laughing at.
Really, look at any significant benchmark, or at what types of projects use which languages. The evidence is extremely clear, and it makes sense given how computers work.
The factors affecting program speed are numerous, and I would guess that types do not have nearly the greatest effect.
Most of us are not optimizing for program speed. Machine time is cheap. Human time is expensive. The biggest cost-savers for most programmers will be the ergonomics of a language, not the optimizations available to the compiler.
That was certainly true in 1990. The use of JITs has changed that to a large extent, as frequently used code paths can be rewritten at runtime in many cases.
Static types tend to push people towards copy-pasta code, as well, which can even work against performance.
That's the first time I've heard that a language without types is slower than one with types. I'm unsure whether that actually makes sense.
You mean slower as in performance? I'm having a hard time understanding how types == performance, so you have to mean something else.
I guess assembly would be the language you would use if you really need to squeeze out the maximum amount of performance of a single CPU, and as far as I know, it does not have types.
Also languages built on top of LLVM could get by with trading compile times for run time performance. Maybe you meant compiling gets slower without types?
Strong typing means that at runtime dynamic behaviors don't need to be accounted for.
For a compiled language, the compiler could know that `foo` and `bar` are 32-bit integers, and thus compile `foo + bar` to call the addition function for 32-bit integers. In an interpreted language, the interpreter could do the same thing.
Without that typing information, both would have to invoke a generic addition function that detects the types of its arguments at runtime, notices they're both 32-bit integers, and delegates to the corresponding addition function.
Of course there are tricks that can be played in the dynamically-typed case, like assuming the same call site will always have the same types, so that there can be a short path that assumes that and only branches rarely. That way the first or first few executions might be slow, but eventually they get almost as fast as the statically-typed case. JS engines in particular and JITted runtimes in general usually do this.
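The dispatch cost being described can be sketched in TypeScript itself (the function names here are invented for illustration):

```typescript
// Sketch: what a dynamic runtime must do for `a + b` when operand types
// are unknown until runtime, versus the direct path a compiler can emit
// when it statically knows both operands are numbers.

function genericAdd(a: unknown, b: unknown): unknown {
  // Every call pays for these runtime type checks and branches.
  if (typeof a === "number" && typeof b === "number") return a + b;
  if (typeof a === "string" && typeof b === "string") return a + b;
  throw new TypeError("unsupported operand types");
}

// With static types, no dispatch is needed: this compiles straight down
// to a numeric addition.
function typedAdd(a: number, b: number): number {
  return a + b;
}

console.log(genericAdd(2, 3)); // 5
console.log(genericAdd("a", "b")); // "ab"
console.log(typedAdd(2, 3)); // 5
```

The inline-caching trick mentioned above amounts to remembering which branch of `genericAdd` a given call site took last time and trying it first.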
>I guess assembly would be the language you would use if you really need to squeeze out the maximum amount of performance of a single CPU, and as far as I know, it does not have types.
Assembly absolutely does have types. The addition function for 32-bit ints only operates on 32-bit ints, and is distinct from the addition function for 64-bit ints.
Dynamically typed languages still have types internally, they just look like this [0] (imagine it all implemented in assembly, without C's types, if you prefer).
Types are information about the code which enables compiler optimizations, so statically typed languages generally can be faster than dynamic languages. They aren't, always, of course, and other factors come into play.
Yes, absolutely a language without types is slower. Performance problems nowadays are almost always down to memory access patterns and allocations. Dynamic languages often don't have enough information to lay memory out in an efficient way, and they also tend to allocate a lot overall. Assembly is generated by languages that are compiled natively, or on the fly for JIT languages, and the type info is used to generate more efficient assembly.
Yea, the structural typing nature of TS drives me nuts. One thing I like in TS is the ability to reference individual "parts of a type", for example Account["email"] instead of "string" in function signatures. Is there something like that in C#?
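For readers who haven't seen the TS feature being referenced, indexed access types look like this (the `Account` shape and `normalizeEmail` function are invented for illustration):

```typescript
interface Account {
  id: number;
  email: string;
}

// `Account["email"]` resolves to `string` today, but stays in sync
// automatically if the property's type on Account ever changes.
function normalizeEmail(email: Account["email"]): Account["email"] {
  return email.trim().toLowerCase();
}

console.log(normalizeEmail("  Foo@Example.com ")); // "foo@example.com"
```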
TS can generate such good and natural looking JS that, faced with someone insisting on receiving a JS codebase, I’d consider keeping a shadow-repo in TypeScript and handing them its output.
Doesn’t work so well with an existing codebase, but still.
A shockingly large number of people tell you that types don't help in the real world. If those people believed this article, I'm sure they'd find it surprising.
Same. The only JS I have to write anymore is for some customization on top of a CRM platform we use. For that I've decided to go with Bridge.NET. They've built .NET libraries based on typings, so it has support for almost everything.
The resulting JS is FAT, don't get me wrong, but since this is for internal use and it's heavily cached, I don't mind having our users download a 3MB JS file.
I can't imagine not using class-transformer[0] or class-validator[1] in any TypeScript project that deals with external/third-party APIs, remote or not.
I completely agree with the more general point, that consuming external data requires a validation layer. But oh boy do I have feelings about class-validator.
Here's what it looks like to correctly annotate an array of objects with class-validator and json-schema annotations:
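(The code sample seems to have been dropped from the comment; a representative reconstruction of the pattern, using real class-validator and class-transformer decorators but invented `Item`/`Order` class names, would be:)

```typescript
import { IsInt, IsString, ValidateNested } from "class-validator";
import { Type } from "class-transformer";

class Item {
  @IsString()
  name!: string;

  @IsInt()
  quantity!: number;
}

class Order {
  // The element type must be stated three times:
  //   1. the TypeScript annotation `Item[]`,
  //   2. @ValidateNested({ each: true }) so each element is validated,
  //   3. @Type(() => Item) so the transformer knows what to instantiate.
  // Forget #2 or #3 and the array is silently passed through unvalidated.
  @ValidateNested({ each: true })
  @Type(() => Item)
  items!: Item[];
}
```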
It's not just that you have to define everything in triplicate, it's that the failure mode for forgetting any of the above is to silently not validate your data. Unless you're very careful, you don't get the safety benefits that were the whole point of using class-validator in the first place.
If I were starting from scratch, I would instead consider either io-ts or a solution that involves a code generation step, where this entire category of risk is avoided.
I don't understand the point of transforming things to class instances, though. All of TypeScript's strengths are available without the need for things to be an actual `instanceof` something, right? Can you elaborate?
FWIW, I'm biased against `class-transformer` because we were wondering why some of our (TypeScript-driven) API endpoints were so slow, and `classToPlain` + `plainToClass` were the culprits, comprising 2/3 of the time spent. If you look at the source code for those functions, they're kind of insane.
Agreed that actually doing the transformations is a bit of a mug's game. However, TypeScript's type system treats class definitions interestingly, particularly around metadata. You can attach validation metadata to a class and pass plain objects around that structurally match that class so long as you don't define methods or a constructor.
As mentioned in my sibling comment to yours, I do this to define DTOs that are then expressed in JSON Schema and passed through ajv, and it's pretty slick. The objects being used are all just JavaScript objects, the class is just being used as a metadata holder and something you can reference/get via `design:type`/`design:paramtype`/`design:returntype`.
I can, because I use ajv and JSON Schema instead because I can trivially just take those models and use them in OAS3 documents. The web framework I've built on top of Fastify--and will be open-sourcing soonish--uses pretty simple model and property annotations to let you define your systems in those terms, and I find it to be really flexible and pleasant to work with.
EDIT: Also, as mentioned elsewhere in this thread, io-ts is a pretty good option too!
The title is misleading because TypeScript does not implicitly do any kind of runtime type checking on data sent by remote clients. If the client and server both happen to be written in TypeScript by the same team, the TypeScript type system can give those developers false confidence that the API endpoints on their server enforce runtime type validation on remote user input (I've seen this too many times), but it does not.
To implement API endpoints correctly in TypeScript, you're supposed to assume that the arguments sent by the remote client are of 'unknown' type and then you need to do some explicit schema validation followed by explicit type casting. Some TypeScript libraries can make this easier but it's misleading to say that this is a native feature of TypeScript; in fact, it is no different from doing explicit schema validation with JavaScript (there are also libraries to do this). You should always validate remote user input regardless of what programming language you use.
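A minimal dependency-free sketch of that pattern (the request shape and function names are invented for illustration):

```typescript
// Treat the request body as `unknown`, validate it at runtime, and only
// then let the static type system see it as the expected shape.
interface RegisterRequest {
  email: string;
  age: number;
}

function isRegisterRequest(body: unknown): body is RegisterRequest {
  return (
    typeof body === "object" &&
    body !== null &&
    typeof (body as Record<string, unknown>).email === "string" &&
    typeof (body as Record<string, unknown>).age === "number"
  );
}

function handle(body: unknown): string {
  if (!isRegisterRequest(body)) {
    return "400 Bad Request"; // reject before any typed code runs
  }
  // Inside this branch, `body` is statically a RegisterRequest.
  return `registering ${body.email}`;
}
```

The type guard is the "explicit schema validation followed by explicit type casting" step; libraries just generate or centralize it.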
Merely getting the compile-time static type checker to shut up because the types in your client code match the types in your server code is not good enough unfortunately - In fact, it may conceal real issues by giving developers false confidence that runtime type validation is happening when in fact it is not. A hacker could write a client in a different language and intentionally send incorrect input to crash your server unless your server explicitly validates the schema.
The reality is that there is no guaranteed type continuity/consistency between the client and the server. Any tool which gives the illusion that there is any kind of continuity is deceptive by design.
This is why I like plain JavaScript; it requires real discipline and it doesn't give any sense of false confidence. Developers should always be on their toes. The only way to improve code quality and security is by exercising more caution, not using more tooling.
The benefit pointed out by the author of this article is in fact one of the few genuine gaps in TypeScript's type safety capabilities. Praising TypeScript for this fictitious feature is only going to give developers false confidence that TS somehow takes care of input validation for them and this is going to lead them to getting hacked.
Our system uses io-ts to dynamically validate all incoming and outgoing API data. The static API types are guaranteed to match the io-ts codecs, so the runtime validation will match the static types.
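For the curious, that arrangement with io-ts looks roughly like this (the `User` codec is an invented example):

```typescript
import * as t from "io-ts";
import { isRight } from "fp-ts/Either";

// The runtime codec is the single source of truth...
const User = t.type({
  id: t.number,
  email: t.string,
});

// ...and the static type is derived from it, so the two cannot drift.
type User = t.TypeOf<typeof User>; // { id: number; email: string }

const result = User.decode(JSON.parse('{"id": 1, "email": "a@b.c"}'));
if (isRight(result)) {
  const user: User = result.right; // both statically and dynamically checked
}
```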
Re: "it's misleading to say that this is a feature of TypeScript": I didn't say that. TypeScript makes this kind of static verification possible. It's impossible in JavaScript.
Re: "The benefit pointed out by the author of this article is in fact one of the few genuine gaps in TypeScript's capabilities": yes, it's a gap in TS. Again, TS makes it possible (as opposed to JS). io-ts (mentioned explicitly by name in the article) backfills the type erasure shortcoming for our purposes, at the expense of being more verbose and producing worse error messages when compared to first-class reflection in a non-type-erasing language.
Re: "going to lead to them getting hacked": no, io-ts isn't going to lead to that more than any other runtime validation scheme. If someone believes that TS' static types provide runtime guarantees, yes, they could write highly insecure API code. But it's hard for me to imagine someone getting to an experience level where they can write a server-side router that's generic over all possible API endpoint payloads, while at the same time not knowing that TS erases types at runtime. Erasure comes up early in the process of learning TS because it leads to surprising behavior, like objects at runtime having properties that aren't present in their static type.
It's definitely possible (just as easy in fact). You just need to define a schema for each API endpoint. TypeScript does not physically protect you from having to enumerate all the properties and size constraints of the data one by one. In terms of the actual schema validation step, there are lots of tools in JavaScript which let you do the same thing; ajv, z-schema are just a couple of examples.
My point is that TS adds no value there. TS's value is only in static type checking. What the article is claiming is that TS somehow adds value with runtime type checking.
The argument of saying that "TypeScript adds value because it can reuse its internal type definitions for the purpose of validating remote input" is circular - It refers to a problem which doesn't exist in JavaScript to begin with.
JavaScript has no type definitions so being unable to reuse type definitions for schema validation does not qualify as a drawback on its part. The argument simply does not apply and cannot be used to claim TS's superiority.
At best you could claim that TypeScript cancels out one of its own shortcomings:
- TS shortcoming: You need to define types everywhere... That adds a lot of work! (-1 point)
- TS benefit: But you can re-use these type definitions for doing schema validation of user input as well! That saves a lot of work! (+1 point)
But the net gain over JS is 0 because:
- JS shortcoming: You need to define a schema for all your endpoints to validate user input... That adds a lot of work! (-1 point)
- JS benefit: But aside from that, you don't need to define types anywhere... So that saves you a lot of work. (+1 point)
The only real argument to be had is whether or not compile-time static typing adds value over dynamic typing.
My point is that static type checking of APIs does not solve any new problem which static type checking in general does not already solve - That's why I don't agree that it solves API woes specifically. My argument is that it adds no value in that area.
If a developer already does input validation correctly with JavaScript, then switching to TypeScript will not add any value for them in that area.
Furthermore, my first argument (about TS giving false confidence) is that TypeScript does not make it any more likely that an unskilled developer would be able to identify and solve the problem of schema validation when compared to JavaScript... From the unskilled developer's point of view, false confidence (which TypeScript can sometimes provide) is worse than having no confidence at all (which is what JavaScript always provides).
Type checking is 100% about confidence; it gives more confidence. Confidence and correctness are two completely different things and I wanted to point out that there is such a thing as false confidence.
I rename the "email" key in our register endpoint to "emailAddress" and all code that touches that key turns red less than 1s later, whether it's in the client or server.
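A tiny sketch of the setup that makes this work: one type definition in a shared module, imported by both sides (all names here are hypothetical):

```typescript
// shared/api.ts (imported by both client and server)
export interface RegisterPayload {
  emailAddress: string; // renamed from `email`: every use site now fails to compile
  password: string;
}

// client.ts
export function buildRegisterBody(p: RegisterPayload): string {
  return JSON.stringify(p);
}

// server.ts
export function describeRegistration(p: RegisterPayload): string {
  return `new user: ${p.emailAddress}`;
}
```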
Edit: All of these edits to your posts after I've already replied are very confusing.
What you're describing applies to static type checking in general. I don't argue this point. Sorry about the edits. It's very difficult to express such things unambiguously.
I'm more concerned about the effects of the post's title on people's programming practices than the actual content of it. HN does tend to have this effect unfortunately.
Also, I should point out that in terms of achieving correctness, it's possible to get similar value to what you're describing simply by having good tests.
I don't want to get into this argument now, though, because it could last forever. But hopefully it highlights why it's important to keep arguments tightly bounded and not make blanket statements when the domain is so large and complex that we can go off on a tangent in an infinite number of directions.
Ultimately, I enjoy JavaScript and I don't want future employers to force me to use TypeScript because of articles like yours (with those kinds of titles)... I was already forced to use TypeScript at my last company, it went well but it would have been even better if founders had let me and my team use JavaScript... For one, I might not have quit the company. I was tired of debugging mangled JavaScript (compiled from TS) using vim over SSH whenever there was a problem (it was a large decentralized P2P project so often that was the only way to debug it). The drawbacks of TS were definitely not worth its benefits for that specific project.
If you want to play with a powerful type system check out Raku. It offers gradual types that are a hybrid of dynamic and statically checked.
Basically, you can define a `subset` of a type with a validation predicate. Subsets can be named or anonymous and inline in function signatures.
Another thing that made me think of Raku was your discussion of NULLs. Any type container can be defined or undefined, an undefined Array container `isa` Array. But you can specify whether your annotation means to be specifically only defined or undefined members of the type. So `Int:D` is a defined integer, whereas `Int:U` is undefined, and `Int` is either.
The type specifications combined with multiple dispatch can make for pretty code.
It's a really interesting language and, in my opinion, worth checking out. It would be great to hear your thoughts on it.
This video convinced me even further that dynamic typing is superior. The speaker basically admitted that static type checking still requires unit tests because types in statically typed languages have too broad/imprecise granularity. I was also thinking about issues related to timing, race conditions and incorrect state mutations; static typing doesn't prevent any of these. IMO, static typing doesn't even begin to address a tiny fraction of all the possible programming mistakes that one might make. IMO its added utility value is often so low that it's basically not worth the mere hassle of having to come up with type definitions.
Personally, when I write a function definition, I always try to visualize the set of possible values that the function will need to handle. So given that I already have that precise set of possible arguments in my mind as I write the function, it doesn't add any value for me to then formally generalize my function parameters as being integers or strings or some other types that are too broad to effectively constrain my function input to the required level. With statically typed languages, merely specifying that a variable is an integer doesn't save me from having to think about what specific subset of integers I mean. For example, if my function only works with odd numbers as input, using the integer type definition will not offer me any additional safety.
I would argue that most functions are like this. The type definitions are never granular enough to guarantee correctness. The type system only protects you from the most basic/obvious mistakes - So obvious that you don't even need the type checker to tell you.
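For what it's worth, TypeScript can recover some of that granularity with a "branded" type plus a smart constructor, though the invariant is still enforced at runtime rather than by the compiler (the `OddInt` name and helpers are invented for illustration):

```typescript
// Structurally a number, but only obtainable through the constructor
// below, which enforces the "odd integer" invariant at runtime.
type OddInt = number & { readonly __brand: "OddInt" };

function toOddInt(n: number): OddInt {
  if (!Number.isInteger(n) || n % 2 === 0) {
    throw new RangeError(`${n} is not an odd integer`);
  }
  return n as OddInt;
}

// This function can now statically demand an odd integer.
function middleIndex(length: OddInt): number {
  return (length - 1) / 2;
}

console.log(middleIndex(toOddInt(5))); // 2
// middleIndex(4); // compile error: plain number is not assignable to OddInt
```

The check still happens once at the boundary, but after that the type system tracks the invariant for you.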
> The speaker basically admitted that static type checking still requires unit tests because statically typed languages have limited granularity in the typing.
Static types only cover one set of bugs, but it is an extremely rewarding set. The payoff of static typing, and what TypeScript focuses on, is early feedback (did I get the name right, did I forget to convert some arguments, what can this value do?). If you focus on the "it eliminates some tests" aspect, you are completely missing the point.
> Personally, when I write a function definition, I always try to visualize the range of possible values that the function will need to handle - So given that I already have that specific range already in my mind as I write the function, it doesn't add any value...
So what the function will handle remains in your head? Or have you found a better way to document it than type annotations? Do you write code that anyone else has to read now, or that even you have to read later?
> The type definitions are never granular enough to guarantee correctness.