The author talks about Immutable.js, but doesn't mention Records, which have been a real godsend in my application. I use Immutable.Record with types [1] (using Flow), and I feel very confident with this setup. You are forced to set default values for every key, and the code will crash if you try to set a key that hasn't been defined.
It's really great to have typed Records (often nested), and I expose the getters as regular properties so that I don't have to call "get()" all the time. I agree that it's not perfect, but it's miles ahead of anything I've worked with in the past (at least in terms of JavaScript.)
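Roughly, the setup looks like this (a TypeScript sketch of the idea; my real code uses Flow, and the keys here are just examples):

```typescript
import { Record } from "immutable";

// Every key needs a default value, fixed at definition time.
const UserRecord = Record({ id: 0, name: "", email: "" });
type User = ReturnType<typeof UserRecord>;

const user: User = UserRecord({ id: 1, name: "Alice" });

console.log(user.name);                  // property access, no get() needed
const renamed = user.set("name", "Bob"); // ok: returns a new Record
// user.set("age", 30);                  // rejected: "age" was never defined
```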
JSON schemas are awesome, and I like the idea of validating types at runtime on the front-end. I already have a JSON schema that defines my API (I use Swagger), so I could even go one step further and auto-generate my typed Records directly from the schema. I'll get immediate feedback about any type errors in my JS codebase as soon as I make changes to the API. I generate my Swagger API specification automatically from some RSpec tests (using rswag [2]), and I also run Flow during CI, so this would result in a broken build. I could also make this work with the data I'm inlining in the HTML to hydrate the Redux store.
Random aside: I also have one set of data that just gets dumped into a jsonb database column, so the schema for this specific data is only being defined by the front-end code. In my backend I'm always careful to provide a default value, but it's not really critical data. Anyway, I just realized that I should be defining this jsonb column as an actual model, but instead of ActiveRecord, I could use ActiveModel and a plain Ruby class. That's a neat idea. It would give me a schema and validations on the backend, while still having the flexibility and performance of storing everything in a single json column.
Try Elm. The Elm abstraction is probably the only one where you literally will never ever touch JavaScript or raw HTML.
It is the least leaky abstraction I have ever encountered, which is amazing given that it is also blazingly fast compared to every other framework from React to Vue. Elm is a way higher-level abstraction than JSX or Vue, yet you don't need to understand the lower-level details to produce more efficient applications.
The most leaky abstraction I have ever encountered is probably SQL. Optimizing SQL involves understanding what it compiles into; with Elm, no such understanding is required.
This is the opposite of my experience with Elm. I tried it for a while. Getting the time or a random number takes a tutorial to learn how to do, because those things aren't purely functional.
Then I wanted a fast lookup table; it turns out dictionary lookups in Elm are O(log n) rather than constant time, because Dict is backed by an immutable tree structure.
So, I disagree with you and went back to JavaScript.
Although I disagree with you, I never stated one was better than the other. All I said was that Elm was the least leaky abstraction I've ever encountered. Your frustration with IO proves my point. There is a practical logic behind why it was done that way, but the practicality won't be apparent until you create a huge 10 million LOC monolithic SPA with Elm and compare it to the same type of app written in JavaScript.
> All I said was that elm was the least leaky abstraction I've ever encountered. Your frustration with IO proves my point.
Doesn't that prove my point, that there's an impedance mismatch of Elm trying to present a purely-functional world on top of a very side-effect-full environment of the browser+DOM etc.?
> but the practicality won't be apparent until you create a huge 10 million LOC monolithic SPA with elm
It sounds like you're arguing that for apps smaller than "absurdly large", plain JavaScript can be better.
Calling bs on anyone having anything approaching 10 million LOC in Elm. You're just making things up.
Edit: for reference, the Linux kernel has about 20 million LOC (according to Wikipedia), so you're talking about half a Linux kernel written in Elm. Absurd.
>Doesn't that prove my point, that there's an impedance mismatch of Elm trying to present a purely-functional world on top of a very side-effect-full environment of the browser+DOM etc.?
Your reply doesn't even address my point. That's all I'm saying. All I said is that I disagree with you.
>It sounds like you're arguing that for apps smaller than "absurdly large", plain JavaScript can be better.
>Calling bs on anyone having anything approaching 10 million LOC in Elm. You're just making things up.
>Edit: for reference, the Linux kernel has about 20 million LOC (according to Wikipedia), so you're talking about half a Linux kernel written in Elm. Absurd.
Don't ever call what I say absurd or BS; that's plainly offensive and against the principles here on HN. If you want me to continue replying to you, I'll need an apology; otherwise I will ignore you.
The comparison is quite unfair. Elm manages a system of 1k nodes accessed by one user, while SQL manages a system of 1m records accessed by 1k users (I am counting browsers), so... yeah, leaky. You never ever have to optimize anything in SQL when you have tables with 1k rows. It's the scale of things that forces you to optimize.
Elm is a high-level abstraction that's faster than simpler, lower-level abstractions like JSX or Vue. That's why it's a really good abstraction.
SQL is a leaky abstraction, but not because of scale. Scale is a requirement, and the abstraction chosen for that scale is actually not the wisest choice. SQL is an expression-based language used to execute algorithms, like binary search, that are inherently imperative. Thus it is a bad abstraction, and the scale of the requirements magnifies this. The need for the EXPLAIN ANALYZE keywords exemplifies this flaw. A good abstraction for databases would be a language that explicitly declares the algorithm used for the query. This would let users explicitly optimize the query without needing to cross the barriers of abstraction. As with Google's V8 implementation of JavaScript, ubiquity and tons of resources invested into creating an incredibly efficient interpreter have made SQL and JavaScript the best choices in their respective fields, even though they wouldn't have been the best design choices if people were given the chance to start over.
Rust is a better design than C++, but again, ubiquity, early adoption and random forces have made languages with questionable design choices the dominant paradigm.
The JS interop is basically a leak in the abstraction. It is in itself a flaw, and I'm claiming that Elm is such a good abstraction that you don't need to access or touch the holes in the abstraction.
Does BuckleScript do any runtime type checking? If not, you have the same problem the moment you call out into any pure JS library - your type annotations say what you expect, but JS code can merrily give you something else.
I have to admit, as much as I enjoy working with TypeScript, this same problem is always on my mind. I find myself continually writing the same tests for type safety that I would in JS. I have to remind myself that, yes, the types are safe while writing, but still not quite at runtime. It can lead to pretty fundamental flaws if you're not careful.
It wouldn't solve that, but it would prevent the code from running with data that is incorrect according to type annotations. Think of it as a runtime assert that is automatically generated from the compile-time type annotation.
(Unfortunately, the JS runtime object model does not allow for such runtime type checks to be performed with reasonable performance.)
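Hand-written, the idea would look something like this (a sketch; the User type and the assert are illustrative, not the output of an actual generator):

```typescript
interface User { id: number; name: string }

// What a generated runtime assert for the User annotation could look like.
function assertIsUser(value: unknown): asserts value is User {
  if (typeof value !== "object" || value === null) {
    throw new TypeError("expected an object");
  }
  const v = value as { id?: unknown; name?: unknown };
  if (typeof v.id !== "number" || typeof v.name !== "string") {
    throw new TypeError("value does not match the User type");
  }
}

const data: unknown = JSON.parse('{"id": 1, "name": "Alice"}');
assertIsUser(data);     // fails fast here if the untyped side lied
console.log(data.name); // data is now statically known to be a User
```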
OCaml as a language is so close to a sweet spot between performance and ergonomics in my experience. I'm not sure if the Bucklescript compiler is generally running on Node or natively, but that could also be a big variable at play here.
Probably the reason is that reasonable Flow types for Records aren't in the mainline 3.0 release, and 4.0 (with good static typing support) has been in RC hell for about a year and a half.
> in various cases the types are wrong and the compiler doesn’t care
It's disturbing how often I encounter this in TypeScript. Or its inverse: the types are correct and the compiler is wrong. Or a third common problem: the types for a library are incorrect. The unsoundness of TypeScript is not merely theoretical. The compiler is frequently just wrong. For that reason, I am mystified by the amount of enthusiasm for TypeScript that I encounter online.
The most important thing, if you haven't already, is to set the noImplicitAny and strictNullChecks compiler flags. I find that not having these flags set is a common source of Typescript type unreliability.
The flags prevent two behaviors that are a major source of type failures: defaulting some types to "any", and allowing null/undefined to be valid values for ANY type (!). Enabling these flags for an existing codebase is likely to uncover hundreds of type errors that are papered over with the default compiler settings.
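As a quick illustration, both of the following compile under the default settings and become compile errors with the flags on (a minimal sketch):

```typescript
// Rejected under noImplicitAny: the untyped parameter silently
// becomes `any` with default settings.
function double(value) {
  return value * 2;
}

// Rejected under strictNullChecks: null stops being a member of every type.
const label: string = null;
```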
W.r.t. upstream types for libraries being wrong, I find the silver lining here is that libraries will usually be happy to accept patches for their types, often with significantly less fuss than a patch to the actual code.
edit: Even with those in place, I also notice that Typescript types are not 100% trustworthy. This often comes down to unsanitized input, use of the "as" keyword, or hidden type violations. The conclusion I've come to is that it's just very hard to type Javascript code, partly because the values themselves are so prone to being mutated. So I agree that Typescript doesn't feel like the perfect solution to me. (I usually use Clojurescript these days).
To be clear: I'm not complaining about these third party types. I really appreciate that people took the time to write and share these. My gratefulness is definitely not contingent on perfection. I fully expect human work to be imperfect. Type systems exist to guard against this imperfection. A type system in which type annotations are maintained separately from the implementations, and routinely debugged, is highly suspect.
Speaking for myself, the key selling point for Typescript (and Flow) is its pragmatism. While it might not have the soundness of a Bucklescript or Purescript, it does what its alternatives have chiefly failed to do for me: Allow me to see immediate incremental benefits without having to refactor an entire codebase.
While the alternatives might be in some ways better to start a greenfield project in, the ability to gradually add types to an existing codebase, with developers who are just learning the language, and to make the system sounder over time is a killer feature for me.
Also, the big-company backing helps corporate slaves like me trying to sell a new technology to management.
You make a good point. I don't mean that TypeScript yields no benefit. But if I can't trust the compiler 99.99% of the time, then I find that I'm double-checking my work _and_ the compiler's work. It's not clear to me that the benefit outweighs this uncertainty. When you say that your system is "sounder", you don't really know whether that's true if the type system is unreliable.
Weekly, sometimes daily, I run into a situation where flow is straight up wrong in baffling ways, or needlessly developer-hostile, and I end up having to help my junior team members write worse (or at least more verbose) code to compensate.
It's still a million times better than JS without TS/flow.
100% agree with this. If you're using Redux, the boilerplate of TypeScript is huge. Doing all that work and still having the compiler miss problems at the state level made TypeScript seem pointless to me.
I've found myself preferring type guards for checking action types, since that avoids having to keep one big `Actions` union around that takes up X lines and really doesn't serve any other purpose.
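Something like this (a sketch; the action shape is made up):

```typescript
interface AddTodoAction { type: "ADD_TODO"; text: string }

// A type guard narrows the action without a big Actions union.
const isAddTodo = (action: { type: string }): action is AddTodoAction =>
  action.type === "ADD_TODO";

function todos(state: string[] = [], action: { type: string }): string[] {
  if (isAddTodo(action)) {
    return [...state, action.text]; // narrowed: text is known to be a string
  }
  return state;
}
```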
Is Redux a TS-native library? Angular has typing issues of its own, e.g. Forms are typed with 'any', no easy access to certain HTMLElements' properties, etc., but the rest is okay. (Meaning it's hard to do bad things without an early warning/failure during compilation.)
This was a deliberate design choice by Anders. It is quite difficult to make a sound type system for JS that has nice interoperability and ultimately adoption was more important.
I would not use typescript without the "strict" compiler option. It enforces lots of things that should have been default and covers a lot of the issues you mentioned.
On a brand-new project, I agree wholeheartedly. To do otherwise is to miss half the benefit, in my opinion.
However, those of us that are in love with TypeScript often want to refactor existing JS into TS, where allowing implicit `any` and allowing `null` make it a bit easier to convert larger codebases.
It's too bad the author chose specious and self-invalidating arguments for using Purescript and Elm as opposed to pointing out real issues with Flow and TS, isn't it?
That's usually a sign that you need to define your own types for the objects in question. I had to do this for immutable.js records, because the default types on its methods are basically lies.
Typescript brings much needed order to the complete (seemingly) typeless nature of Javascript. That is the source of enthusiasm. TS is the direct result of a development team working out the kinks of having to maintain a large JS codebase.
With regard to the third, this is a problem specific to libraries written in JavaScript. Hopefully it'll become less common as more libraries are written in TypeScript and the types are automatically generated from the code.
Passing props through a higher-order component using JSX's spread operator instead of the attribute syntax, e.g. `{...{...props, foo, }}`, and accessing that prop in the wrapped component; I found type checking worked with the attribute syntax. More examples can be found in the issues for `@types/ramda`. In some cases the types are incorrect; in other cases, TypeScript's type inference is too weak.
From your description, that sounds like the compiler accepting something that isn't correct. I'm more interested in the case(s?) mentioned where the types are correct and the compiler won't accept the code.
I will have a quick skim through the @types/ramda issues, thanks.
I have been using Flow (mostly) and TS (occasionally, including counter-checking problems in Flow to see what TS does in a similar case). Overall I would not go back to the time without a type checker; however, I too am mystified by the enthusiasm. Anyone who wants to check the state of static type checking for JavaScript should take some time and read through the issue trackers of both projects.
I do that quite a lot myself and have also contributed a tiny bit to both projects (no core code, things like definitions, small doc improvements, quite a bit of answering to issues and often checking the posted code for myself, often in both Flow and TS).
Ignore the issues posted by people who really just need a forum to ask questions, and there are still plenty of real issues left. Worst part: many of them won't be solved (too hard, too much work, too many issues overall). You have to change your coding style and write in a way that the type checker can actually help you with. Also, it's easy to end up with types that are far more complicated than the code they are supposed to describe.
I spent more time working on the types than on the actual code. What makes it worth it: one, I get some control over the code other people write using my library; if I insist they too use the type system, I can prevent them from misusing the API to a degree. Two, those other people also include myself in future incarnations. Three, refactoring can be significantly easier; if you have good types, the checker will tell you all the places you missed changing.
So overall, at least for my situation, mostly for writing a library with few external dependencies (so I don't need the more or less unreliable external type definitions) that the business heavily relies on in many products, adding the very considerable additional effort is worth it. Still, I very much disagree with all the enthusiasm.
The type checkers are software trying to understand software, and what they're attempting to check is not just an already complex dynamic language, but, on top of it, people's code that comes in a million styles. Those type checkers can be valuable, but they are far (very far) from perfect, and they come at a considerable price, mostly in the time it takes to create and maintain the types (the additional step to remove type annotations for production code is pretty insignificant in comparison).
> I spent more time working on the types than on the actual code.
That's interesting. I very strongly don't, when writing TypeScript (but I don't then also mess around with a second type system on top of it). Better Intellisense and fail-up-front checking means I write code much, much faster.
I didn't have to change my coding style because this is already how I wrote code; I now have the tools to actually do it well.
TypeScript is not a "complex dynamic" language unless you step outside its bounds. Which, sure, sometimes you need to; it's not perfect, and one of its advantages is being able to opt out when absolutely necessary. Then you fence off that type-unsafe code by strongly testing before you hand something back into TypeScript because you're a competent programmer who understands the limitations of your tools. But that happens so very, very rarely that optimizing for that corner case seems foolish.
How much time you spend depends on your data structures. If the easy default options are good enough you don't need to sweat. We make heavy use of disjoint unions, and while the simple case here too is easy, some things that we do - although they still are very simple on the Javascript side - are hard on the type checker (Flow in this case).
> TypeScript is not a "complex dynamic" language
I don't know what you read into my comment, but if you just stick to what I wrote: JavaScript certainly is, and TypeScript is just JavaScript (the type annotations are a separate thing). I'm not sure what your point is overall, I have to admit; it's a bit on the defensive side for no reason that I can see. For whatever reason you seem to feel personally attacked ("I didn't have to change my coding style")? I refer back to what I wrote, point for point. Am I not allowed to write down my observations? Especially when they're based on years of practice with all the relevant technologies (JS, TS, Flow), and I'm not just making stuff up without data (i.e. actual experience). Plenty of other people wrote similar comments here.
I don't feel defensive, but I do find your comments interesting and your experience almost diametrically opposed to mine--which is what I said.
JavaScript is quite complex and quite dynamic, but the reason TypeScript got me back into doing web stuff was, by and large, because it removes that except for in clearly delineated places (at least, once you turn on strict mode).
This article is just a recapitulation of the old pure functional programming vs. imperative programming argument but wrapped up in new packaging. Most of these issues, such as needing to validate types at codebase boundaries, functions that have side effects, and unsound static type systems, could just as easily be leveled at most imperative languages (C++, Java).
So yeah, go use a proper functional language (many of which compile to JS!). Complaining that an imperative language isn't a soundly-typed functional language is tautologically true I guess, but who cares?
It's not really, though. Because even pure FP folks read this article and say, "This is just nonsense and ill-considered propaganda."
For example, the idea that you can't trust Typescript because it might call untyped code but you CAN trust Purescript is just wrong. It's not even just wrong, it's utterly misrepresentative of what actually happens in Elm and Purescript. It's misinformation.
Not OP, but I'd guess the idea is that calling untyped code in TS is basically Pushing the Big Red Button, or Launching the Nukes™. When doing that in TS, you do it knowing it's your own responsibility to test and establish trust in the code. And Elm and PureScript are the same. In other words, Elm and PureScript won't save you from screwing it up the way you can with TS.
All pure/statically typed languages have escape hatches. It has been leveraged as a counter-argument to using them. But but but, they will say, Haskell has unsafePerformIO!!! Well, yeah, sure it does. You can subvert the borrow checker of Rust, too.
But you have to do it explicitly. You can grep your code base for "unsafe" (literally). You can lint for that. You can code review for that. You can document it. And that's what makes it OKish. In a way, types are like big fat oven mitts, and you're the lead programmer of a bakery, where you pop buns in the oven all day. You can discard the oven mitts for a time, though, and do whatever delicate work you need your actual fingers for. Just don't touch the hot stuff, lest you get yourself burned!
Companies and coders will always push themselves to the absolute brink, where they no longer trust the code and the system is nearly unmaintainable.
Because every step towards that line has not just cash value, but exponential cash value if it’s a startup.
And by constantly pushing themselves into unfamiliar territory, coders develop the most impressive possible resumes. They either have relevant experience or have experience in a tech even beyond what the company they are interviewing with is using. That's your best position to get hired.
It’s easy to write JavaScript you can trust. Heck, you can write a trustworthy web server in Visual Basic. Trustworthy code just isn’t worth much to 99% of the players involved.
Sure, if you're going to dump your project in 6 months like a hot potato. Any coder who takes pride in their work knows that constantly piling on more tech debt to push things out faster only slows down the project in the end. The only exception I can think of is very early startups, where if you don't get a product out in 6 months, the project won't have a future to think about.
I think most companies do do this, on a rolling basis with different parts of the product.
Except by “dump every six months” it’s more like “after six months don’t add much to that anymore, unless we can dedicate a good three weeks to hacking it into place. And then dump the whole component in three years”.
Since your customer base and business requirements are likely to turn completely over in 3 years this makes some (perverse) sense for a startup.
This seems cynical and wrong. I've never written a line of code for the purpose of "resume padding." Trustworthy code is absolutely useful to a development team.
But writing that code is HARD. Training a growing team to write that code is hard. Banging out new features, on the other hand, is (at first) easy and seems more fun.
I worked on the system which is now the customer admin for Square's ecommerce platform, acquired when they bought Weebly.
We were all learning Vuex, while also trying to ship code. Across a half dozen pages we had a half dozen slightly different ways of addressing Vuex data. It wasn’t so much complexity that a professional coder couldn’t keep it straight in their head. But it was a trivial amount of complexity to fix. Two days work, maybe a week. Mostly variable naming.
But we chose to ship that feature because we wanted to prove to the company we could deliver. Now those coders will live with that choice for a log* time. New pages will use another slightly ad hoc scheme.
Eventually some architect will decide something like “Vuex is too conventional, we’ve got too much glue code, should we try switching to Redux or Vuedux?”
And the correct answer for the devs is: yes, it’s good for your resumes.
And (speculation) the engineering management will probably never swallow “we have to do three weeks of variable renaming and function moving to fix this now”. If they couldn’t swallow two days of cleanup in exchange for meeting an arbitrary internal target, they won’t swallow three weeks which probably pushes back a user-visible release.
* I meant to type long, but log time, pun intended
That sounds like no one actually designed or architected the system, and everyone just went off and built things without an overarching plan. This is sadly somewhat common in software projects. This could be due to a variety of factors, but to chalk every software project up to "resume padding" seems like a simplistic take.
Using your example but applied to other companies, I've seen the below, or some combination of all:
* Unskilled devs that have never used any of the new technologies and spent little time prepping beforehand.
* Management who demanded unrealistic deadlines or demanded devs skip design phase in favor of shipping faster.
* Lack of technical leadership, senior engineers are hard to come by at many large orgs and are often overworked or too busy to be everywhere at once.
* Cut throat career minded individuals that always jump ship to the next new thing.
In your case, it seems insane that you would be writing software with multiple teams or even multiple developers without some overarching plan. It shows a clear lack of technical leadership on the part of management and senior engineers.
I don't even think "jumping on the next new thing" is a particularly valid "career strategy". Many companies do use relatively "old" technologies and don't care as long as the job gets done. Slack even uses PHP. Not to mention the big companies which might pay more attention to algorithm problems instead of the particular language you solve those problems in. Do the recruiters really think you are the one if your resume has all the newest hottest things written all over it?
Personally I learn new technologies if they are fun to learn and make my life as a dev easier. That's it. I'm not sure if anybody jumps on wagons just for the sake of it. For example I tried out Rust, Julia and Elixir, hated the first but liked the later two. I also tried Vue a bit but with Phoenix framework a lot of things can just be done with server-side rendering so I just dropped Vue in the project and felt totally fine with it.
It's a bit hard to follow what the author wants to tell us, so I might be misrepresenting below (sorry!). I don't think the assertions that (I think) they make hold though, or are particularly useful.
The author claims "functional programming, types, JavaScript: pick two". He supports that with links to a number of libraries that have incomplete type definitions, and a few patterns that are hard to express in a static type system.
I don't think that assertion holds. Some JavaScript APIs are indeed written in a style that's very hard to give static types for, and some libraries have sub-par type definitions.
But odd JavaScript APIs != the entirety of functional programming, and sub-par type definitions are just missing features that need to be fixed. You can write perfectly fine, statically typed, functional programming idioms within the TypeScript type system (and presumably within Flow). You can even do that within the confines of Java's type system, and that's a lot weaker than TS' or Flow's.
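For example, a small sketch of a typed functional idiom that checks fine in plain TypeScript (my own hand-rolled example, nothing from the article):

```typescript
// An Option type as a discriminated union.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

const some = <T>(value: T): Option<T> => ({ kind: "some", value });
const none: Option<never> = { kind: "none" };

const mapOption = <A, B>(opt: Option<A>, f: (a: A) => B): Option<B> =>
  opt.kind === "some" ? some(f(opt.value)) : none;

// A total head function: no exceptions, no undefined.
const head = <T>(xs: readonly T[]): Option<T> =>
  xs.length > 0 ? some(xs[0]) : none;

mapOption(head([1, 2, 3]), n => n * 2); // { kind: "some", value: 2 }
```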
These type systems are optional, and you can subvert them. If you use the recommended strictness flags though, it's hard to do so accidentally.
> In a dynamic language like JavaScript, it can be hard to know what the shape of your data is.
You don't need static typing to program defensively. Static typing instead ensures a certain defensive position, but as a developer you can choose to program in such a way even in a dynamic language.
That being said it is foolish to make a blanket assumption that data is doomed to confusion. The state and shape of data is determined by the authoring application and reinforced by the receiving application. If data is shaped improperly then a defect is present in the system, so fix the defect.
If a dose of common sense is still not enough... then program in TypeScript.
It's nothing to do with "a dose of common sense" so much as it is the reality of dealing with humans. Humans make mistakes. Static typing is automated checks to make sure you didn't. When doctors start using checklists their patients do better [1]. Static typing is checklists for programmers.
In theory weak dynamic typing works great. In practice humans suck. Hence checklists.
This completely ignores my comment in its entirety. Do people make mistakes: yes. Does software ever have defects: yes. Typically, people making mistakes in software is called a software defect.
It is absolutely critical in a discussion like this to understand that defects in software are not necessarily defects in data. This conversation is about data, and checks for processing data are already present in various forms. For example when was the last time you complained about weak type systems when working with JSON, XML, or SQL tables?
> For example when was the last time you complained about weak type systems when working with JSON, XML, or SQL tables?
Well, I don't know about the person you're replying to, but I certainly complain about these all the time.
Interestingly and counter to your point, I think, XML grew a "type system" of sorts with XML Schema (which is widely used IME, esp. where interfacing with external services as in SOA), and SQL actually has a reasonably strong type system for most implementations albeit dynamically checked at query planning time. (SQLite seems to be the outlier here.)
JSON is actually the outlier here, but even with JSON there's been attempts at a sort-of type system with JSON Schema. Why? Well, because beyond a certain size the "just chuck some data somewhere in there" ceases to be workable when you want stable and maintainable interfaces between software components.
Ultimately type systems are invented and used for almost entirely pragmatic reasons.
> ... XML grew a "type system" of sorts with XML Schema (which is widely used IME, esp. where interfacing with external services as in SOA) ...
The type system came from both XML Schema and XPath and was codified with XSLT 2, then expanded when XQuery was defined. It's not been a type system "of sorts" since XSLT 2, which filled in a lot of gaps and undefined behaviours.
I didn't really ignore your comment either though.
>> You don't need static typing to program defensively. Static typing instead ensures a certain defensive position, but as a developer you can choose to program in such a way even in a dynamic language.
Correct, but in a statically typed language defensive programming is not required on your part to the same extent (because the compiler enforces a whole set of defenses on your behalf). You'll also forget sometimes, mistype sometimes, check the wrong things sometimes, forget to update some defenses. The compiler won't.
>> That being said it is foolish to make a blanket assumption that data is doomed to confusion. The state and shape of data is determined by the authoring application and reinforced by the receiving application. If data is shaped improperly then a defect is present in the system, so fix the defect.
The type system enforces that you checked the data (by, for instance, providing a conversion function `fn convert(input: String) -> Result<StructuredData, Error>`), which then forces you to handle the error there; after that you don't need to check anything else later.
>> If a dose of common sense is still not enough... then program in TypeScript.
That's half a solution because unless your entire ecosystem is typescript those types are at best aspirational. An improvement to be sure, but still aspirational.
Which, IMO, can all be summed up by "we have automated checks so humans don't have to write them and make mistakes doing so as they go". Outsource the checks to an automated system.
Data in itself can be typed. For JSON objects the equivalent type is usually called a Record. Within a single strictly typed program, data is transferred between methods and functions and the shape of this data would be described by a type.
There is not much of a difference, theoretically speaking, between data passed between functions inside an app and data passed between apps via some network API. What you describe is actually a real issue in typing: types are lost across systems. JSON is not strictly typed, and neither is XML or SQL. So if I have a typed application that spits out data in these formats, I lose type checking. If you think about it, even jumping across abstractions from one typed language (say Rust on the backend) to another typed language (say TypeScript on the frontend), you lose type safety due to incompatibilities between the languages.
I don't know if there's research or thought going into solving this issue. But a universal type language/syntax that can be translated into the respective type systems across API boundaries could lend a lot of safety to entire ecosystems. There is no type safety across HTTP services, and designing a system to overcome this issue is very possible and, to my knowledge, not something people have thought about yet. New idea?
Imagine a language/framework that compiles into web apps and microservices on some cloud provider. The language has syntax that allows me to decorate functions to specify which microservice it runs on. Data transfer between microservices can now be placed in some binary format or maybe the language has syntax that allows me to specify human readability. Either way the language abstracts away the devops and webapp layer and combines it into a single source code... this would allow type checking of the shape of data to not happen just within an app, but across services.
Is there such a framework/language that does this? I'm positive it can be done. I would totally love to work on an idea like this. The microservice fad talked up modularity between services but in practice people realized that microservices are too complicated. Perhaps an entire language framework that combines the web app layer and dev ops layer and adds type checking across services would solve this issue.
SQL has a schema, and the way I use it, I generate interfaces with strict types from my schema, specifically because I've run into trouble not doing so.
XML, well, there's the schema language for that, but that's fairly "outdated" and I haven't touched it in ages, but if you've ever looked at an HTML parser, it's borderline sentient because of weak typing.
JSON? That's a huge source of issues. I've run into them professionally and personally, and they cause outages all the time. That's why protobufs are great (schema) and GraphQL with generated interfaces is great.
You try to parse the received data into a predefined data type. If this fails, you are forced by the type system to handle the failure scenario; if it parses correctly, you now have a 100% valid data structure you can work with in the rest of your codebase, no defensive checks necessary.
Imagine some language with the following type and functions:
type Person { name: String }
function parsePerson(json: String): Person { ... }
function usePerson(person: Person) { ... }
The compiler will guarantee that usePerson will always get the correct shape of data for its person parameter. The only function you need to worry about is parsePerson, which will generally throw a runtime error if the incoming JSON does not conform to the expected shape. And in most web frameworks the JSON-to-type parsing is done for you, so once the framework hands you data in the type you asked for, you can expect it to be correct.
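In TypeScript, a hand-rolled version of that sketch would look roughly like this:

```typescript
interface Person { name: string }

// The only place that can fail: parse unknown JSON into a Person or throw.
function parsePerson(json: string): Person {
  const data: unknown = JSON.parse(json);
  if (typeof data === "object" && data !== null &&
      typeof (data as { name?: unknown }).name === "string") {
    return data as Person;
  }
  throw new TypeError("JSON does not conform to the Person shape");
}

// Everywhere else, the shape is statically guaranteed.
function usePerson(person: Person): void {
  console.log(person.name.toUpperCase());
}

usePerson(parsePerson('{"name": "Ada"}'));
```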
Static types can and usually are checked at compile time.
Whether the language runtime also uses type checks while running depends entirely on the language and the runtime.
IIRC, the JVM (to take an example) more or less throws away a lot of the static type information -- which is often too generic to be useful in guiding optimization -- in favor of runtime profiling, and several of its key optimizations involve inserting runtime type checks.
In most modern languages with static types, types are reified - i.e. actual values have some runtime type associated with them, and it's impossible (or, at least, exceedingly difficult) to create a value that has the wrong representation for its type.
In practice, with parsing, this usually means that runtime type metadata is used. Effectively, types become your schema, and you can validate against them as needed.
You write code at the entry point that might fail. Perhaps it returns either structured data or an error. From that point on, you can work only with structured data and the type-system will help you. You might be able to generate a parser from a schema.
The tool or approach does not have to solve every single related case ever in order to be useful. So yes, in the case of two different applications, it might still end in a runtime error.
Then again, if both ends use a schema, then comparing schemas is easy and gives you a fair chance to find the places that differ between the systems.
If there is one place or lib serving as the source of the schema on both ends, then a mismatch is just the result of a different version being imported by one of them, which is easy to check.
Well, it's like saying static typing doesn't help you because both statically- and dynamically-typed systems need to do arithmetic. It doesn't really make sense, and nobody is saying static-typing is helpful because it frees you from the shackles of a + b.
The difference is that in a statically-typed system, you're validating data at your boundaries such that they comply with your nominal types that are then compiler-guaranteed to be internally consistent.
If you think common sense can ensure safety, then you are superhuman. I have a 10 million line program that outputs an extremely complicated dataset with hundreds of parameters. Occasionally one parameter disappears... at a rate of 1/1000 runs. This is a bug, but it bypassed everything. The system has 100 e2e tests of "defense", but because the bug happens 1 out of 1000 times, 100 integration tests did not have enough coverage to find the error... so the program made it to production...
Now with static type checking, that program wouldn't even compile. The compiler can prove that the program is wrong, rather than you going through hundreds of unit tests trying to catch an error while proving nothing. Proving a program wrong is better than writing billions of unit tests that cover the entire domain of a function.
Your strategy of using common sense to go through 10 million (exaggeration of course) lines of code is an effective strategy, but not an intelligent one.
I love TypeScript and use it in any new project I create. The author is right though, you need to have team discipline to avoid using the `any` type when it can be avoided, and immutability is not in scope.
TypeScript is probably the closest we'll get to my personal ideal of static typing in JS, while understanding that you sometimes need an escape hatch to deal with dynamic typing as JS is a dynamic language.
I think that the focus now should be allowing other more modern languages in the browser by means of Web Assembly. Rust is a great candidate for dealing with immutability and unsoundness concerns. I can't wait for wasm to be fleshed out more.
It does raise the question though: would web development be as ubiquitous and as popular if the core language in the browser was statically typed? Does the freedom JS offers you allow greater adoption overall, meaning that it's beneficial to loosen the type restrictions because the ecosystem will flourish as a result?
You don't need to wait for WebAssembly to solve these problems.
TA mentions 3 languages (Elm, Reason & Purescript) that combine sound type-checking and immutability right now.
None of these bring the cognitive overhead of Rust’s borrow checker and, in the case of Reason at least, compile at lightning speed to readable (though optimised) JS.
But isn't that true for every software project? You need discipline to avoid most of the garbage that can be created by using any language or framework.
I've seen some of the worst java apps before, but no one says "never use java". It's generally accepted that skilled engineers can make clean, easily maintainable apps, and unskilled engineers make garbage apps.
This just sounds like sloppy engineering and a total lack of professional rigor.
I used to believe that the language didn't have any bearing on the quality of code, but I personally feel that coding in Rust has made me a better and cleaner programmer, and I trust my team's Rust code more than I do its Java code.
You can also enable things like "strict"; then you need less team discipline, since it will be enforced by the compiler. People can still use 'any' explicitly (but not implicitly), however.
Static types are great because they make certain classes of bug impossible to write, but they have their limits. There are escape hatches, like casts and Object types, even in statically-typed languages, though fewer of them in languages with more advanced type systems. There are reasons to like PureScript over TypeScript, just as there are reasons to like Haskell over Java. TypeScript doesn’t give you a way to ban side effects, for example, being part of the imperative family of languages that it is. So if you are super into banning side effects you might like a language that does.
The thing to realize is that at the end of the day, all you are doing is reducing bugs, and while your type system (IMO) really can help reduce bugs, it will never catch all bugs. There is still value in a function declaring that its argument is of type User (and not defensively checking it) even if it is theoretically possible, in the presence of a bug, though unlikely, that the argument is not really a User. The function has said what its precondition is, and (assuming you are validating any data that comes over the network or from an untyped source), this precondition will in practice be checked 99% of the time. If a bug happens to line up with a hole in the type system, you’ll just have to debug and fix it.
> Static types are great because they make certain classes of bug impossible to write, but they have their limits. There are escape hatches, like casts and Object types, even in statically-typed languages
Even those don't have to be escape hatches as such, in the same way e.g. List.head doesn't have to be partial, that was an explicit decision.
When casting from Rust's Any trait to a concrete type, you get an Option. There's no subversion of the type system, and there is no runtime error unless you as the developer decide to generate one.
All good points, but I'm having a hard time with this statement:
"There is still value in a function declaring that its argument is of type User (and not defensively checking it) even if it is theoretically possible, in the presence of a bug, though unlikely, that the argument is not really a User."
I'm trying to think of a statically-typed language that would allow such a thing, and I can't, so I assume that you're referring to TypeScript here, where you have a world where things are "partially-typed"?
In TypeScript you could also pass `any` as argument to such function and the compiler would be happy. Of course the noImplicitAny flag helps here, but sometimes you could get `any` from a library with poorly written types as well.
This produces a similar error ("Type 5 is not assignable to type 'string'") when compiled with TS:
var s: string = "Dog";
s = 5;
The difference is that it's a compile-time error, while in PowerShell it's a run-time check. But what's the difference in this case? Either way, it tells you.
One of the strengths of JS is how the runtime environment can be dynamically interrogated and polyfilled, and how it can often gracefully degrade if there's a mismatch between static assumptions and runtime behavior.
Is there a vision for how PureScript/ReasonML/Elm etc. can accomplish the same thing?
The brute force approach is to place a boundary between everything external and handle it via an FFI. This adds friction: apps built under this approach are less capable and evolve less gracefully. I hope there's a better solution.
Nim is another language with a JS backend and it is also kinda good at this: it has a pretty good type system, but it's easy to use it gradually for JS code.
You have a `JsObject` type (which is basically `any`) which lets you write your Nim->JS code in a dynamic way and cast to static types whenever you need to interface with static parts of your program. You can still also define the type signatures of a JS API as well and have a fully type safe experience, but you can do it lazily: just for the functions/types you need.
Basically you can use a JS API any way you want: from almost fully dynamic to gradually more and more type-safe.
JavaScript is maybe a language made for, or suited to, simple apps.
If you're building a big, complex app, there is a simple way to fix this: use stronger languages that compile to JavaScript, like Elm/PureScript/BuckleScript/ScalaJs/Nim/ClojureScript (with core.specs) and so on [1].
Yes, that's exactly what the conclusion to the article says.
As someone who has (almost entirely) given up writing plain JS, my skimming of this article is "2500 words about issues with JS mutability, then: just use ClojureScript".
Additionally, you don't even need to use language implementations that compile to JavaScript. WebAssembly is progressing and is quite arguably ready for production use.
I really like Rust personally and think it's a good option for web applications, but there's many options that are becoming available.
Aside from that, I'm surprised the author didn't mention checking constructors. It's arguably the most dependable way of checking whether something is of a certain type.
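For example (a sketch; the class is made up):

```typescript
class User {
  constructor(public name: string) {}
}

function describe(value: unknown): string {
  // Constructor check: instanceof both tests the prototype chain
  // and narrows the static type.
  if (value instanceof User) {
    return `User named ${value.name}`;
  }
  if (value instanceof Date) {
    return value.toISOString();
  }
  return String(value);
}

console.log(describe(new User("Ada"))); // "User named Ada"
```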
But I agree that it's a shame that TS/Flow lose their ability to check types whenever partial application or currying is involved - it's not like it's impossible in principle.
I worked on a 150k LOC pure JS codebase for almost two years and my experience is different: if tests aren't enough, then they're either not written well enough or there aren't enough of them.
Think of all the time you wasted writing unnecessary tests that the compiler could perform automatically on your behalf by you properly specifying your types. You'd be much more productive and confident, and have less LOC to maintain. Refactoring would be much easier too.
No type system would measurably help in this case since we necessarily had to deal with browser behaviour which wasn't exactly standardised and tended to change from version to version.
Well, don't lose sight of the goal. The goal is fewer bugs. Static typing is just for internal quality. Users don't give a rat's ass whether your types are correct as long as the program works. Proof: many wildly and massively successful programs are duck-typed.
Nobody ever cared to use tests to replicate what a type system does. That's a straw man. Tests go straight for the actual goal by testing actual values. Correct values are what are necessary. Types on the other hand are neither necessary nor sufficient.
Frankly, I don't even know what to do with such a statement. What is your definition of "internal" ?
I just did a very large re-factor of a codebase a few days ago where I changed a lot of event handler (method) types in a codebase of around 200K lines of code (Object Pascal). This means that the method signature of every single event handler could have been wrong in some way if I made a mistake, and it would have introduced a pretty egregious, non-internal bug in every case. How would you test such a massive change without replicating every single thing that a static type system and compiler already do for you ?
This has been my experience too. I've done a bunch of these sorts of changes over the years in C++. It's very straightforward: break code, fix errors, run code, fix remaining bugs (if there are any), commit code.
None of these projects had unit tests, but that's fine - you don't really need them for this sort of modification, just some patience, a tolerance for boredom (that, or the ability to suck it up anyway), and a complete lack of concern about messages such as 'ERROR: 0/1,577 file(s) built. NOTE: Error limit exceeded - only showing the first 99,999'.
Well, let me first remind you that you're in a JavaScript thread. OO Pascal is strict enough that you're never going to get the chance to see things work when they get types they didn't expect, because the compiler is unforgiving. So that's going to be hard for you to imagine, but it happens in javascriptland. A function in JavaScript meant to take a number and output "X hours ago" could very easily get the string "3" and still do exactly what the user expects. That's what I mean by an incorrect type that is internal only, and not a user-facing bug. This is an overly simplified example, too. With duck-typing you can often get away with all kinds of not-the-type-the-method-expected and live to tell the tale.
I haven't touched OO Pascal in 15 years, so I can't speak to how you would write tests for it, or if there's even a testing framework. I will tell you that I left the if-it-compiles-it-works attitude around that time too though.
Yes, I understand how JS works. My company markets and sells a web development IDE that includes an Object Pascal->JS compiler.
Re: your definition of "internal": I suspected that this was what you meant. I also disagree with this particular version of "correctness". If the developer intended that a particular function accept an integer, then accepting anything else is essentially a bug, and the fact that it works at all is down to pure coincidence.
As for writing tests in OP: you write tests in the same manner that you do with Java, C++, C#, etc.
Internal quality is external quality haha. If your types are wrong your program is wrong, and if it just happens to work now wait until more people use it, or you refactor it, or you change the way it works. Your users are basically just fuzzing machines haha.
Your statement that "nobody ever cared to use tests to replicate what the type system does" is trivially false. Have you ever tested what happens when you pass "null" into a function that probably shouldn't take null? If so, you just did. Some type systems allow you to even specify valid subranges of types, and combined with algebraic data types, it's effectively impossible to misuse.
That's not necessarily true though, is it? If the "1" in the "1 hour ago" message in the header of your reply wasn't the type the writer of the output function expected, but it ultimately still shows up as "1 hour ago" to the user, there's no user-facing defect. I can promise you that this happens all the time in JavaScript. You're in a thread about JavaScript.
> have you ever tested what happens when you pass "null" into a function
Not that I recall. Javascript will throw an appropriate error when it detects that your duck isn't quacking correctly. No one writes validation logic everywhere on internal units, so there's no functionality to verify with a test.
With that said, null is decidedly not "just a type". It's also a value. I generally avoid using null everywhere as much as possible too, because it's a pretty terrible concept.
> Some type systems allow you to even specify valid subranges of types, and combined with algebraic data types, it's effectively impossible to misuse.
Sure but you're in a thread about javascript and typescript. And let's be honest: the vast majority of type systems do not support that.
TS/Flow don't quite lose types "whenever" partial application or currying is involved; it's a bit more subtle than that, and in many cases it works fine. But especially when using generics with currying, you often have to rewrite things in more verbose ways (or structure the functions in a particular way) to make the types check.
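A sketch of the kind of thing I mean (exact behavior varies by TS version):

```typescript
// A curried, generic map.
const map = <A, B>(f: (a: A) => B) => (xs: A[]): B[] => xs.map(f);

map((n: number) => n + 1)([1, 2, 3]); // fine: inferred as number[]

// But pass a loosely-typed function point-free and inference can silently
// degrade: String's signature is (value?: any) => string, so A becomes
// `any` and the xs argument is no longer checked...
const loose = map(String);            // (xs: any[]) => string[]

// ...unless you instantiate explicitly, the "more verbose" rewrite:
const strict = map<number, string>(String); // (xs: number[]) => string[]
```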
Many of these concerns seem to not be limited to Javascript at all - especially once you're sending/receiving data between systems. In Python for example, when receiving JSON/XML payloads you have the same defensive parsing layer at the point of entry, but humans can still mess that up and monkey patch a class that breaks assumptions elsewhere without warning. Did I miss something in the larger argument?
The larger argument is that you can't have that "language level trust" until you push __everything__ through the same compiler/transpiler/validation pipeline and can trust that everything has gone through it and has not been modified by other means.
Using JavaScript as a platform where you mix code generated through different conventions and tools is a halfway solution. JavaScript treated as the unmodifiable binary output of the compiler creates the same level of trust as using other languages.
Technically:
* In C++ somebody can use a different header for the class and break the class abstraction.
* In Haskell some library might go through the FFI into the runtime, assign values to data in a non-functional way, and break the functional abstraction.
* In Python and Java there are ways to access data in ways that circumvent the compiler or the language semantics.
The difference is how strong the convention against doing so is, and whether breaking the abstraction has any benefits.
No, that is exactly correct. The brunt of the argument applies to most popular languages- Java, Python, Ruby, and what have you- although the specifics vary.
After watching "The Value of Values" by Rich Hickey, I've become cynical towards complex types. I think my ideal would be maps and scalar types, and maybe something like Clojure spec for more complex checks.
I think shortcuts like the following are fundamentally flawed, at least when dealing with user input:
if (user && user.name) { ...
While it sucks to even have to check for user, not checking that it's truthy, then that it's an object, then that name is the correct type, all at runtime, will cause bugs over time.
This type of developer-provided shorthand validation is pretty much the cause of all hacks/data breaches currently. It's more of an attitude, or how a developer prioritizes their own wants/needs over maybe a client/end user. Like I mentioned in another comment, checking for more advanced, abusive behavior will make issues with type checking seem basic in comparison.
It will be nice when JS gets the Elvis operator and you'll be able to do `if (typeof user?.name === "string")`, which will have the added value of type narrowing/interaction with `never` if name is expected to be something else.
I'd say the elvis operator would make things worse, especially wrt the parent post.
The problem isn't that `user && user.username` style validation is verbose, but that it's very error-prone. Making it less verbose just means you're more likely to use it before you reach for a better validation strategy, like a declarative validation library.
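For example, here's a sketch of what a declarative approach buys you; these combinators are hand-rolled for illustration, not from any particular library:

```typescript
type Validator<T> = (value: unknown) => T;

const str: Validator<string> = (v) => {
  if (typeof v !== "string") throw new TypeError("expected a string");
  return v;
};

// Build an object validator from a declarative shape description.
const obj = <T>(shape: { [K in keyof T]: Validator<T[K]> }): Validator<T> =>
  (v) => {
    if (typeof v !== "object" || v === null) {
      throw new TypeError("expected an object");
    }
    const out: Record<string, unknown> = {};
    for (const key of Object.keys(shape)) {
      const check = shape[key as keyof T] as Validator<unknown>;
      out[key] = check((v as Record<string, unknown>)[key]);
    }
    return out as T;
  };

const validateUser = obj({ name: str });
const user = validateUser(JSON.parse('{"name": "Ada"}')); // user.name: string
```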
But you still have to validate and parse unstructured data in Haskell as well. And what would be the point of having the type checking done at run time? That's what many dynamic programming languages already do. If you're curious about Flow or TS then you must already have a problem with that approach.
Type checkers are tools that help us reason about, structure, and correct code. That's all.
Flow is decent these days but will probably never be as powerful as Haskell. My only gripe is that it forces my team to write verbose code at times in order to gain the benefits: you lose point-free style, gain a bunch of identity and instance checks... but it's a small trade off to make.
One small trick worth remembering and using in Flow to gain something similar to exhaustive type checking is to cast to the empty type in the default case when pattern matching on a type[0].
It's not an elegant type system but if you have experienced a good type system (like Haskell's) then it does make using Flow/TS much easier.
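For reference, the TypeScript analogue of that trick uses `never` where Flow uses `empty`; a minimal sketch:

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
    default: {
      // Casting to the empty type: if a new variant is added to Shape,
      // this assignment stops type-checking, flagging the non-exhaustive
      // switch at compile time.
      const unreachable: never = s;
      throw new Error(`unhandled shape: ${JSON.stringify(unreachable)}`);
    }
  }
}
```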
> In a dynamic language like JavaScript, it can be hard to know what the shape of your data is.
Being dynamic doesn't preclude variables from being type checked by the runtime. Reading this just sounds like "JavaScript has no built-in support for type checking, so let's pretend it's not possible".
All the time people spent making weird, unreadable 'fat arrow' functions, but you still can't define the types of the arguments you want to receive?
> Being dynamic doesn't preclude variables from being type checked by the runtime.
It kinda does, because in a dynamically typed (as colloquially defined - if you want to debate that definition, that's another discussion) language, variables don't have types - values do.
Type checked against what, if a variable doesn't have a type?
There are contexts in which two values are involved, where we can at least check that the types are matching (or reasonably compatible), like (a + b). JS is a massive failure in that regard, too, and there are dynamic languages that do it far better - e.g. in Python, ("1" + 2) is a runtime type error, not 3.
But in the context of this thread, I feel that's not what we're talking about. We're really talking about contracts - and contracts require typed bindings (variables, function arguments etc), not just typed values. Which means that they do require static type annotations. As we've discussed in another subthread, those annotations might be checked dynamically - but they still have to be declared statically by the coder, which makes the type system effectively static from their perspective.
To put it differently: if you require types to be declared explicitly, there's no reason to not check them statically. Dynamic checking of static types really only makes sense for gradual typing systems, where code with static type annotations might be called from some non-annotated code, and you want it to be possible without forcing static typing on the caller. At that point, yes, you can have a dynamic type check in the callee, which ensures that if code without type annotations doesn't actually get the types right, things fail fast at the static/dynamic typing boundary, instead of allowing it to propagate into the statically typed code (as is the case in e.g. TS today).
> if you require types to be declared explicitly, there's no reason to not check them statically.
I think some of this thread is suffering from confusing terms so, my entire point is this:
There is no native way in javascript to say that the first argument of function foo must be an instance of class Bar (which in JS could be a string or an array or some custom object or even a DOM element - everything is an object and thus has a 'class').
The common response is "use typescript" which means you write the code with argument type declarations, and then compile to javascript. This (in theory) means any code you've written in the project in TS which calls our foo() function, will warn/error if the first argument isn't an instance of class Bar.
But, and this is the problem: it's not enforced by javascript, it's "enforced" (for whatever definition of enforced typescript uses) by the TS compiler.
So if I then use the compiled JS of function foo() and call it from native JS (i.e. not compiled by TS) - it won't tell me I have the wrong type.
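A minimal sketch of that gap, with hypothetical `foo`/`Bar`:

```typescript
// bar.ts - the annotation below is enforced only by the TypeScript compiler.
export class Bar {
  constructor(public label: string) {}
}

export function foo(b: Bar): string {
  return b.label;
}

// The emitted JavaScript is just `function foo(b) { return b.label; }`:
// the annotation is erased, so a plain-JS caller can write `foo(42)` and
// silently get `undefined` back instead of a type error.
```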
Yes, I understand that. My point was that it's not really dynamic typing at that point anymore. It's kinda like C#, where you do have "dynamic", but there are static type annotations that code utilizing "dynamic" can validate against. At that point, I'd say that it's no longer a dynamic language in conventional sense.
Getting back to your issue - it's a pragmatic decision for TS, because enforcing its type system efficiently is virtually impossible on top of the JS runtime type system, when you consider how it has to work. It sounds like a simple, obvious check for trivial cases, like when I annotate a function argument as a number or a string. But what if it's an interface? At that point, the runtime check must ensure that 1) it's an object, 2) it has all the members listed in the interface, and 3) they have the corresponding types. If those are themselves interfaces, this becomes recursive. If they're arrays or maps, then this has to be done for every element. An array of arrays is O(N^2) already!
To check it efficiently, you need proper runtime type tags - an array has to know that it's an array of numbers, not just an array. Then the runtime checker can just query that, like it does in Java when you, say, downcast Object to int[]. But TS has to work on top of the JS runtime, so any such runtime type tagging would manifest as visible artifacts in JS; and TS is intentionally designed to be "invisible" from the JS interop perspective.
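To make the cost concrete, here is a sketch of what such a runtime check would have to do for a hypothetical `Point` interface:

```typescript
interface Point { x: number; y: number }

// The runtime check must confirm the value is an object, then check every
// member listed in the interface along with its type.
function isPoint(value: unknown): value is Point {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.x === "number" && typeof v.y === "number";
}

// For an array, the check repeats for every element; nested interfaces or
// arrays multiply the work, which is why it can't be done cheaply without
// runtime type tags.
function isPointArray(value: unknown): value is Point[] {
  return Array.isArray(value) && value.every(isPoint);
}
```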
If we want quality strong typing that works properly with cross-language interop for the web, we need to throw away the JS runtimes entirely (it would be a good idea in any case - it's an atrocious object and memory model for anything other than JS itself), and replace them with something saner.
This may not be a popular opinion for those coming from C or Java, but runtime type checks in JavaScript are usually an anti-pattern. If my function defines a contract that the caller violates by passing in null or undefined, then my function should raise an exception when it tries to access `undefined.xyz` or whatever. Otherwise, embrace the duck! If the function can work with what it was given, it should instead of trying to enforce some Java-esque type system at runtime using typeof or instanceof.
But be pragmatic. If there is some critical area of your app that absolutely must never ever throw, typecheck away (although this is JS after all, so it still may throw if it damn well pleases).
Documentation and/or a type checker. JSDoc comments are both documentation for programmers and can be enforced using tooling like the Closure Compiler or TypeScript.
JavaScript is no different from most other dynamic languages this way
I’m sorry that I commented on this article. Knew it was a trap. I’ve been writing functional-style for a long time and just wanted to share a bit of what I’ve learned.
Use whatever tools work for you, but also don’t disparage or dismiss other tools because they have different strengths and weaknesses than what you know or appreciate. There is no need for proselytizing; these are just tools after all!
I like JavaScript and TypeScript because they have let me quickly build many great things and share them with people. I also subjectively find writing in them to be enjoyable. I can always turn to a different language if it is more productive and enjoyable than JS/TS for the task at hand.
If the runtime can guarantee your function receives an instance of a particular class, that's a lot of boilerplate you don't need to write yourself to assert the same thing.
Typechecking at the runtime level means that you have to write tests that traverse every single code path to make sure your code works the way you expect. You can just do this at compile time with static typing. That saves you a lot of boilerplate tests you don't need to write yourself to assert the same thing.
This is where terminology fails us. I really wish the terms "static typing" and "dynamic typing" as presently used would be gone entirely.
What you're saying is that explicit type annotations can be enforced at compile time. However, they can also be enforced at runtime, and they're no less safe for that - you just see the errors later, but any call that would be blocked by a static type checker can similarly be blocked by a dynamic checker. I think that's what OP refers to by "runtime checking", not writing explicit asserts.
A similar example that web-focused developers are likely familiar with is type hinting in PHP. For the developer's purposes, there is no 'compile' step (yes, the code is compiled and then run, but it happens at request time, so it's not realistic to compare it to compile-time type checking in a compiled language like Java).
The 'compile/run' lifetime (in browser based JS) bears similarity to that of PHP: compile time effectively is run time, because it's compiled during the request and then executed immediately.
In the context of JS, 'compile time' refers to... TS?
My point was that if your TS project is used in a non-TS context (i.e. the compiled JS is called from regular JS code, not compiled TS code), you no longer have that compile step.
I also think you've misunderstood what I mean. I'm talking about native language level support for typed function arguments at the javascript level, not relying on TS to provide that.
This article is bizarre. Firstly, the author states the problem that data can propagate and suffer loss/mutation over time. He then cites types (which absolutely do solve this problem), but then dismisses them saying:
> Adding types to our example doesn’t solve the underlying problem. It improves trust within the code base by helping to ensure that data is used consistently, but it says nothing about data received from the outside world.
But anyone who's used type-based programming knows this is nonsense if you simply write a typed parser (of particular interest to gradually-typed JavaScript are packrat-style parsers). Then, your code needs to specify what it wants and what its policy is if it doesn't get it.
But if the author was imagining that compile-time integrity and correctness checks could EVER perform runtime checks... well... they're not even wrong; they're misunderstanding the function of types that badly.
The author then proposes, out of left field:
> But it’s like plugging holes in a leaky ship. The problem isn’t just that you can’t trust the types in your system, but that you think you can. You rely on the types to tell you when a change breaks something, but because they were quietly disabled by an any type, or by use of a library, or by a soundness issue, it doesn’t. Types in JavaScript are different from types in most other languages people use: They can’t be trusted in the same way.
Which is both true and useless. Yes, it's true that type circumvention methods exist, and you never really can know if you're calling into a library that circumvents your type constraints unless you read the code. But yes, that's useless because nearly every typed language also has this mis/feature, including OCaml, Haskell, C++17 and even Rust! So the author is not actually giving a real recommendation or even novel insight.
And then the final substantive section gives us arguments that the author has themselves just discredited:
> Or you change the game and just use PureScript. Or ReasonML, or Elm, or even ClojureScript. These exist today. Production software runs on them. They work with the JavaScript ecosystem, where necessary. And they provide a higher base level of trust in the code that you write and an environment where immutability, functional programming, and types (where applicable) work well and work together.
Firstly, I've written a fair amount of PureScript, and we call into unsafe code ALL THE TIME for the sake of expediency. To even pretend otherwise is just... either willful lying, or someone who has barely interacted with the PureScript ecosystem in any production fashion.
If you use a dependency and don't vet it somewhat (or at least trust a crowd to vet it as a minimal effort) then you're in for a bad time. Pretending otherwise, or that this is a unique flaw of Javascript, is just wrong.
And if you disagree with that, then I have a challenge for you. I propose that you can substitute any language and runtime's type system and GC for "types", and assembly for the underlying JavaScript, without changing any axiom presented in this article.
It's not the types that fail, but the type system and its misuse.
> This is like pretending really hard
No, that's not pretending: static typing is literally about theorem proving, type theory being equivalent to mathematical logic. This is the famous Curry-Howard isomorphism, types corresponding to propositions.
Of course, you can have situations in which your type system cannot describe the propositions that you need, or situations in which holes are left for pragmatism. Such holes include, for example, the ability to use Object and then downcast in Java, with the implicit contract that the people using it know what they are doing - being at the same time a symptom of the type system not being expressive enough.
In TypeScript in particular you can just use the "any" type, which is there for pragmatism and because JavaScript developers would have had a hard time adopting it otherwise. It doesn't help that TypeScript's generics are unsound either.
Of course, you can point at "the outside world" and say that you can't trust it to deliver the types that you expect. But with a good type system, you only need to do that runtime validation once, at the edge where you receive the data, and afterwards the statically typed compiler can take over that responsibility. And indeed, in my experience, libraries for JSON parsing in Scala (e.g. Circe) or Haskell (e.g. Aeson) are very well behaved, and thanks to automatic deriving they make that validation painless in 90% of cases, because the runtime validation is derived from the static types automatically, no work required - something I haven't seen in TypeScript.
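The validate-once-at-the-edge pattern looks roughly like this in TypeScript - a hand-rolled sketch with a hypothetical `User` type, standing in for what the Scala/Haskell libraries mentioned above derive automatically:

```typescript
interface User {
  id: number;
  name: string;
}

// The unknown payload is validated exactly once, at the edge; everything
// downstream can rely on the static User type.
function decodeUser(raw: unknown): User {
  if (
    typeof raw === "object" && raw !== null &&
    typeof (raw as Record<string, unknown>).id === "number" &&
    typeof (raw as Record<string, unknown>).name === "string"
  ) {
    const r = raw as { id: number; name: string };
    return { id: r.id, name: r.name };
  }
  throw new Error("payload does not match User");
}

const user = decodeUser(JSON.parse('{"id": 1, "name": "Ada"}'));
```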
The whole notion of "optional types" is in my opinion flawed.
You either work with static typing or with dynamic typing, and choosing a side will change the way you work; it will change your mentality, your approach to problem solving.
Working with optional types, which is what people tend to do in languages like TypeScript, brings you the very worst of both worlds.
Optional static typing doesn't work. A contracts system, like Clojure's Spec, works much better for dynamic typing and yes, there are fundamental differences, basically the difference between "for all" and "exists", or between compile time and runtime.
---
> Convention: Pretend immutability
N.B. in an actual FP language, immutability is not a convention but something enforced by how data structures are built (e.g. persistent data structures), via the type system or the runtime system. Even in Java, you'll have a really hard time changing the definition of a class or of a "final" field, or mutating a persistent collection (see vavr.io).
---
I like TypeScript overall, I think it's an improvement over JavaScript and it certainly has pretty cool features — but don't use it to judge either static typing or dynamic typing or functional programming for that matter.
If you want to see actual dynamic typing and how it can shine, use ClojureScript. If you want to see actual static typing and how it can kick ass, use PureScript.
And both are good for giving you a real taste of functional programming. JavaScript and JavaScript++ languages are really poor at FP, mostly due to the ecosystem's culture; the very few FP libraries that exist for JavaScript are very unpopular, with only a few gems here and there, like RxJS. So if you're doing JavaScript or TypeScript, chances are your codebase doesn't do much FP.
Did you read the whole thing? This is exactly the point the author is trying to make. Look, you both arrive at the same conclusion even:
> Or you change the game and just use PureScript. Or ReasonML, or Elm, or even ClojureScript.
> If you want to see actual dynamic typing and how it can shine, use ClojureScript. If you want to see actual static typing and how it can kick ass, use PureScript.
But the conclusion doesn't jibe with the comments he was responding to. I found the article a bit annoying because of that, though otherwise interesting.
> Such holes include for example the ability to use Object and then downcast in Java and have the implicit contract that the people using them know what they are doing,
It's worth noting that this is still not quite the same as the leeway that most dynamic languages allow you, because, while you can downcast, all such downcasts are checked. You can't just say that Foo is a Bar, and proceed to treat it as a Bar, in Java and similar languages. You can assert (via a typecast) that it ought to be a Bar - but if it's not, you just get an exception or some other indication of failure.
As a result, it's much, much more difficult to hammer square pegs into round holes in those languages. As it should be.
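For contrast, a TypeScript sketch (hypothetical `Foo`/`Bar`): the `as` assertion is unchecked and compiles silently, while an explicit `instanceof` check behaves like Java's checked downcast:

```typescript
class Foo { foo(): void {} }
class Bar { bar(): void {} }

const x: unknown = new Foo();

// Unchecked: this assertion compiles, and the failure only surfaces later,
// wherever `bar` turns out not to exist.
const b = x as Bar;

// Checked: narrowing fails safely at the cast site, the way a Java
// downcast throws ClassCastException.
if (x instanceof Bar) {
  x.bar();
} else {
  throw new TypeError("x is not a Bar");
}
```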
I instead proclaim that it brings you the best of both worlds. Finding the right mix of static/dynamic is a balancing act, but if you manage it you get the benefits of static types (safety, documentation, tooling) and of dynamic types (fast prototyping, not having to write convoluted code to please the compiler). What are the killer features that gradual types lose out on?
Sure, and lots of codebases are completely untyped. I find it great that you can opt out of the theorem proving that type checking amounts to when crunch time comes. As such it functions as a loan, and your organization should strive to pay back the debt when it can. Nothing can save you if you never get a calmer period.
I think it's the worst of both worlds: you still have to deal with types (often manually, even), but you can't trust your types, since they're not pervasive.
The killer feature is global type inference. I pretty much prototype in Elm as fast as (or faster than) any dynamic language, and when the code compiles it's free of runtime errors except at the boundaries (i.e. JS interop). This allows me to focus on fixing errors arising from my misunderstanding of the problem (and my general incompetence).
I agree that defensive checks may clutter code, but in the current landscape of constant security issues, not using a battery of defensive checks whenever dealing with input is simply naive, no matter the IDE, the compiled language, or the runtime language.
To help with readability while still allowing for complex input checks, I typically abstract the input validation into another function, called near the top.
I think as time goes on, type checking will be the least of our issues. Checking for different types of abuse (technical, social, semantic), context-sensitive boundaries, setting tripwire-esque traps to control abuse, etc. will make any current concerns over type checking seem elementary.
We need to give up on the idea of coding 3-line, beautiful functions that look great on Twitter and realize we live in a time where defense and privacy are a priority, while (somehow) simultaneously pushing innovation.
I hate it, trust me, but if you talk to typical corporate/business clients, this is where their thoughts/concerns currently are. I realize it's a bigger conversation than what I'm mentioning here.
This article explores a lot of things bundled together. One morsel to take away is the bit about inbound data validation.
Type systems need to work statically as well as at runtime, via some kind of reflection API. This way you can unmarshal the data that comes over the wire directly into your language's type system. With JavaScript+TypeScript/Flow, the static type system and the dynamic reflection system are split into two separate paradigms, which in my opinion makes things complicated to the point of sacrificing developer productivity.
I'm not really sure what the author is getting at. You need discipline to avoid most of the garbage that can be created by using any language or framework.
I've seen some of the worst java apps before, but no one says "never use java, it's bad".
It's generally accepted that skilled engineers can make clean, easily maintainable apps, and unskilled engineers make garbage apps.
This just sounds like sloppy engineering and a total lack of professional rigor and in that case, why is javascript to blame?
This is why I love async code: at every step I get either an error or a value, which lets me deal with the error gracefully. I don't think error handling is "clutter". Not trusting code is an understatement. You should be paranoid!
Not sure what this author is saying here. Types have nothing to do with being able to trust other systems and everything to do with the internal formal consistency of the code that you write.
It's about being consistent with yourself.
If you find yourself writing defensive checks in fulfilled promise handlers because the data you get is sometimes invalid, stop. Get up from your keyboard, go to whoever owns the endpoint, and ask them why their endpoint is returning a 200 OK response with data that doesn't live up to its spec. This is a bug in their system, not yours.
If your software crashes on bad user input, that's an error in your software.
By the way, there is a distinction to make between library code, which should exception/assert/abort on error, and application code, which should verify input and display an error to the user.
I've worked for very large companies whose massive codebases are littered with defensive checks that have piled up over many years. Invariably, I find that this is because, when the initial wave of defensive checks inevitably broke down (the thing they guarded against changed behavior yet again), more defensive checks were added.
All of this could have been avoided if those companies had just created a culture, from the beginning, of forcing the upstream data provider to fix its issues rather than coding around them - letting code fail fast rather than degrade gracefully.
Now companies are beginning to introduce integration and unit testing across the board, along with type systems (again, to ensure internal consistency for systems so that other systems can depend on them), and the cruft and defensiveness are slowly going away.
To repeat the old cliché, the best defense is a good offense :-)
>had just created a culture, from the beginning...
Were it such an easy thing...
Here's how I've experienced such things:
I write naive code. Application hard fails. Test environment down - everyone cross with me.
Me: Yo upstream - fix your data please.
...no response.
So I raise a ticket to get data source fixed - send it to the backlog.
Me: Yo delivery - can we get this ticket prioritised, assigned and put into a sprint?
Delivery: Yeah... nah - cause y'know... delivery.
Me: Yo leadership - can we get a decision taken on whether or not upstream should fix their data, or whether we should just program defensively? If the former - can someone tell delivery to tell upstream?
...no response.
At this point - I give up. Write my little defensive check and admonish myself to do so consistently from now on. Commit, push... go home - sleep like babe.
Often the systems you're connecting to aren't maintained by people in the same office or company as you.
Every back-end and front-end system I've worked on for as long as I can remember has had dependencies on third-party APIs. Getting bugs fixed in those APIs could sometimes take months ("change requests"), if at all.
Finding a good balance for how your system gracefully degrades is a bit of an art form.
Don't think it's unrealistic at all from within a single company.
The point about 3rd-party libraries is taken, but I also think that if you are using a 3rd-party endpoint that publishes a specification it then doesn't live up to (and takes months to fix), then it's not a reliable partner in whatever you're doing, and it's not living up to its own promises about the data its customers can expect. Like one of the other commenters here said, time to deprecate it or write a wrapper around it that enforces the data contract.
I agree with this. I’m all for writing checks against values which may be optional, but I don’t litter my logic with checks against code that should provide the appropriate data.
If you’re getting bad data you need to address the issue higher up. Either through fixing your source or if you’re forced into some badly behaving library then you should at most write a wrapper around it and make that as isolated as possible. Then you should start making plans to deprecate that library.
The spec should be typed. Those other users should not just be giving you typed data, they should be giving you a type signature.
I'm referring to a general situation. You're referring to a specific situation of HTTP APIs sharing what I assume is untyped data, where the typing is lost during transport. The solution is to develop protocols on top of HTTP where the type signature isn't lost.
But the gRPC purist who wrote that endpoint while mandating that unit testing private methods is philosophically worse than genocide and using a slackbot to append “don’t write tests for the compiler” to every pull request got a huge payout to leave the company last year before sexual harassment complaints could move forward.
> If it doesn't get fixed, switch endpoints or switch jobs.
At work a large part of my job is writing integrations with other systems our customers have. As an example, just recently an integration failed due to malformed XML. This was an XML file our program receives from another system our customer has bought, based on their spec.
The error was that while the XML header said the encoding was UTF-8, the actual data was Windows-1252 encoded...
The other system had bi-yearly releases, so any fix was months away, assuming their developers even understood what the issue was... And getting our customer to switch system is out of the question, at least short-term. Meanwhile our customer wanted the integration working ASAP.
So, I'd rather code a check, make our customer happy and keep my otherwise quite nice job, than quit and leave our customer stranded.
> If it doesn't get fixed, switch endpoints or switch jobs.
Uhhhhh, this is not realistic? If I have to quit my job or switch services every time those services have a bug I'm going to be switching jobs every other week.
Anyways, I think the author is actually looking for something along the lines of gRPC, where your constraints are far stricter and easier to enforce. Option types don't even really make sense for what they're after from what I could tell.
Switching endpoints may not be an option. It's probably not. Why would your company have two endpoints that do the same thing? If it's a third-party service you could switch to a competitor, but that's not that simple either.
Switching jobs? I mean, yeah, you could, but again that's not exactly an option for a lot of people, particularly in the short term.
Meanwhile, your app is stuck on a loading screen with an infinite spinner and bad reviews are flowing into the app store.
In many companies, there's also option C - fix it yourself. Any way you slice it, the bug is in the system that's reporting the wrong condition, and adding a defensive check merely papers over an issue with additional code complexity.
Sure - your program can degrade gracefully. But that should be for informational / diagnostic purposes only. It doesn't change the fact that you're using a system that is not living up to its data contract.
To make an analogy pivoting on contracts: modern programming is often not a "rule of law" environment. If you try to pretend that it is when it's not, you're just going to get a lot of pain. And you can't fix everything by yourself.
At some point, the most efficient way to push back against bad code is by validating the contract at the boundary. Then, when that breaks, you have a lot more leverage to go around and demand fixing things.
Or, better yet, use systems that don't allow contracts to be so easily broken. Static typing is an example of that.
Getting the problem fixed is about culture as much as anything else.
If there is some legacy reason something cannot be fixed and has to keep returning bad data, then yes, there is 100% a reason to have two endpoints that do the same thing... usually prefixed with /v2/ or something
Is having to use something like a JSON schema just admitting that the language is broken? If you can't trust or use common practices inherent to the language to do things in a typed way when needed, then what is the point of a language that isn't strongly typed?
To be honest, just having a defined, literal type is not going far enough.
Is there a built-in language strategy, in Rust for example, to define a complex numerical type that has common properties like min/max range, exclusive min/max, conditionally required logic, etc.?
Of course you can put in some custom checks/logic to validate, but I'm not sure why people instantly shame json-schema. From a validation perspective (not tooling), TypeScript is simplistic compared to a reusable, extended JSON schema.
It may be a matter of time for other js/ts tools to improve. I’ve been learning Rust and I haven’t seen a better solution currently for runtime/user input validation. (Not a c/rust expert by any means)
Currently, you do runtime checks. We’ve accepted an RFC for integer generics that will let you do compile-time checks instead, but we’re still working on an implementation.
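Back in the thread's home language, the runtime-check flavor of this can be sketched with a branded type (hypothetical `Percentage`, range [0, 100]):

```typescript
// The brand makes the checked constructor the only way to produce the type,
// so the range invariant is established at runtime, once.
type Percentage = number & { readonly __brand: "Percentage" };

function percentage(n: number): Percentage {
  if (!Number.isFinite(n) || n < 0 || n > 100) {
    throw new RangeError(`${n} is outside the range [0, 100]`);
  }
  return n as Percentage;
}

const p = percentage(42); // ok
// percentage(101);       // throws RangeError at runtime
```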
First, no language is perfect, and building a real system with growing team(s) is always about making purity-vs-pragmatism tradeoffs, with the main goal of keeping complexity down as the system scales. The truth is that a system is much more than just the languages it is composed of, and regardless of the purity, exactitude, or elegance of a language, sloppy work will always lead to a messy result.
In other words, languages should not be used as a substitute for discipline, good patterns, or scalable best practices, but rather as an enabler of such conduct.
TypeScript favored pragmatism over purity, but it is a very well thought-out, robust, and expressive typing layer on top of JavaScript.
Sure, for the sake of language purity, developers can "bytecode their way" to JavaScript with Dart, Java with GWT, ReasonML, or Elm, but there is a reason why all of those options, however elegant they might be, have limited traction and a relatively short lifespan (at least compared to JS): they add too many friction points (often unexpected ones) for the sake of language purity, which in the end is a never-ending quest anyway.
TypeScript is a structural typing system, which has its differences from the more commonly known nominal typing systems (a distinction often confused with soundness), and while there is some unsoundness in some specific cases, overall TypeScript is a very robust and expressive typing system on top of the most pervasive language, JavaScript. Not perfect, but very robust nevertheless, and a huge step up from traditional JS. TypeScript combined with modern JS (modules, let/const, async/await/promises, arrow functions with their scoping, destructuring, ...) creates a very robust environment for scaling code and developers.
We did all our backend in Java for 20 years (I personally come from Oracle), and we now see the Node.js/TypeScript ecosystem and runtime environment as competitive for big app systems. For one of our clients, we actually just ported the last piece of our multi-service system from Java to TypeScript; it has 40% less code and is better typed than the Java version. Java's high typing friction and class-for-everything purist approach makes good code design more of a contortionist exercise than an intellectual one, and while nowadays most agree that this OO-obsessed approach was the wrong one, it was considered very pure and "robust" at the time.
Anyway, the short version is that what makes a big code base scale is not the purity of any given part (e.g., the language) but how the whole can minimize present and future friction points. The less friction, the more velocity. Discipline, best practices, tools, environments, and ecosystem are all parts of a whole, and one needs to make sure not to over-optimize one part to the detriment of the others.
Broken promises in Javascript? ... unsurprising ... Here is an imaginary article that I would read: "New web platform runs entirely in webassembly generated by well-typed languages". Subtitle: "When hand-written Javascript is detected, continuous integration activates subdermal shock-collar that each of the developers is contractually obliged to have".
It all comes down to the team's opinions. It's the people, not the language or the tools. All these discussions are about developers trying to be correct. You don't need TypeScript, Flow, functional programming, OOP, or whatever. These tools have their place, particularly because teams cannot work together without some kind of rules. In most cases individual developers differ quite a bit from each other, especially when it comes to their opinions. It's dramatic.