Relatively recently I've adopted the stance that JS (via V8+NodeJS) has the best feature set of the "scripting language" tier of interpreted programming languages, assuming the ecosystems for your problem are comparable. The combination of compile-time type-checking support (via Typescript), parallelism support (recently in v11 w/ worker threads), and built-in async IO paradigms give Node a huge leg up over ruby, python, and perl.
Weirdly enough, JS is the best in my opinion precisely because it does so much to not be itself. The crazy iteration speed of the JS ecosystem means that people have done crazy things, but the good stuff has remained. Python relatively recently added typing[0], and Matz has mentioned that ruby has no plans to include types in the language proper (though people are working on it). Perl might be the closest in terms of features since it has thread support but...
If you enjoyed this hot take, here are a few others you might enjoy:
- "extensive" unit testing is a poor man's type checker
- developers that don't think specifying types is helpful are still early in their careers (those who suffered in the dungeons of Java get some slack)
I believe the most amazing feature of JS is its ability to make its devs think they've got it all figured out.
> Python relatively recently added typing[0]
The typescript compiler and the mypy type checker both appeared in 2012.
Function annotations have also been officially part of the Python syntax since 2006, although they could be used for more than typing.
Browsers still do not support the typescript syntax as of today; you need a transpiler.
> parallelism support (recently in v11 w/ worker threads), and built-in async IO paradigms give Node a huge leg up over ruby, python, and perl.
Python has supported non-blocking processing as far back as 21 years ago with threads. It's actually the language of choice for pentesting networks because it's so easy to implement a custom service scan.
Python then, in 2002, saw the rise of Twisted for pure asynchronous I/O + callbacks, much like JS and AJAX at the time. Asyncio came way later, yet Python implemented async/await and yield before they were part of JS.
Python implemented multiprocessing as well, which allows you to leverage multiple cores, more than 10 years ago. I believe producing the equivalent result is still hard to do consistently across all major browsers and nodejs. The Python multiprocessing pool hasn't even changed its API when we migrated from Python 2 to 3 and works with both (I just tried it to be sure).
> Weirdly enough, JS is the best in my opinion precisely because it does so much to not be itself.
I think you should try to put 4 or 5 other languages in production before stating such a strong opinion. Your post feels more like overenthusiasm due to lack of experience than a real analysis of the state of dynamic programming languages.
There are strong qualities to Perl, PHP, Python and Ruby. The fact that they are so popular without having an accidental absolute monopoly on the most extraordinary platform in the world (the Web) should at least give you a hint of that.
> The typescript compiler and the mypy type checker both appeared in 2012.
> Function annotations have also been officially part of the Python syntax since 2006, although they could be used for more than typing.
> Browsers still do not support the typescript syntax as of today; you need a transpiler.
I did not make a statement about the typescript compiler/transpiler nor mypy, I made a statement about the typing module, which was brought about by PEP 484 (proposed in 2014) and accepted in 2015, a full 3 years after the typescript compiler came on the scene. JS got gradual typing first, even if it was through transpilation -- there are even earlier examples of syntax enhancement (though it was mostly sugar) with CoffeeScript.
JS is undeniably more innovative in this aspect -- it's also made more mistakes.
It doesn't matter what browsers support, because the code is transpiled. The point is that you have access to better tools as a developer while writing code.
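To make the "better tools while writing code" point concrete, here's a minimal sketch (greet is a made-up function):

    // greet.ts
    function greet(name: string): string {
      return `Hello, ${name}`;
    }
    greet(42); // compile error TS2345: Argument of type 'number' is
               // not assignable to parameter of type 'string'

The equivalent plain-JS call happily evaluates to "Hello, 42", and the mistake only surfaces at runtime, if ever.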
Is your point that python devs have been specifying their types in their code in any sort of numbers since mypy was invented? Because I sure don't see that on slides at PyCons over the last ~5 years.
> Python has supported non-blocking processing as far back as 21 years ago with threads. It's actually the language of choice for pentesting networks because it's so easy to implement a custom service scan.
> Python then, in 2002, saw the rise of Twisted for pure asynchronous I/O + callbacks, much like JS and AJAX at the time. Asyncio came way later, yet Python implemented async/await and yield before they were part of JS.
There's a difference between what's possible and what's easy/facilitated well by the standard library of a language. I did not imply that python could not do non-blocking processing. My point was that these patterns were not easy and were not standard -- and the community floundered for years between alternate solutions (gevent/twisted/tornado), eventually creating asyncio while NodeJS has had support for this feature built in from the start.
Also I'm not sure that pentesting claim is correct -- ruby is extremely popular and so is/was perl. Scripting languages with decent standard libraries and easy semantics are popular; non-blocking processing support isn't necessarily why.
> Python implemented multiprocessing as well, which allows you to leverage multiple cores, more than 10 years ago. I believe producing the equivalent result is still hard to do consistently across all major browsers and nodejs. The Python multiprocessing pool hasn't even changed its API when we migrated from Python 2 to 3 and works with both (I just tried it).
Please see the other comments. No one is saying multiprocessing is impossible in python; I specifically called out that it's done by starting other processes. There are subtle differences between how node worker threads and multi-process setups work, in particular the shared memory space and communication mechanisms. The GIL is also an issue.
> I think you should try to put 4 or 5 other languages in production before stating such a strong opinion. Your post feels more like overenthusiasm due to lack of experience than a real analysis of the state of dynamic programming languages.
> There are strong qualities to Perl, PHP, Python and Ruby. The fact that they are so popular without having an accidental absolute monopoly on the most extraordinary platform in the world (the Web) should at least give you a hint of that.
So your point is that everything is different and everything has its own benefits? Well, that's impossible to refute, so I'm just going to restate my point; maybe you can try and refute it directly.
Javascript has accessible (and thus widely adopted) and easy-to-use implementations of the following features, today in 2019:
- Rigorous and powerful compile time type checking
- Parallel computation (without spawning another process or the effects of a GIL)
- Asynchronous IO
The combination of these three features being accessible is what makes Node stand out in front of Perl, PHP, Python, and Ruby.
Python may have "had" the capability to do the things above at various times in the past, but it's technically been capable of doing just about anything, like any other programming language -- you can launch sub-processes from bash, but that doesn't mean you should use it for large applications. What makes a good language is accessible and easy-to-use features that make you both correct and productive.
Node has more of these tools available, and they're more accessible in Node than they are in Python, and more people are using them. Typescript offers more robust type support than typing+mypy. Node worker threads, though very recent, provide parallel computation without child subprocesses, in a shared memory context with no GIL. Asynchronous IO is second nature to JS developers, while that's definitely not the case for python developers.
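To give a sense of what that looks like, here's a minimal sketch (worker threads were still behind the --experimental-worker flag in Node 10/11, so treat the details as illustrative):

    // counter.js -- run with: node --experimental-worker counter.js
    const { Worker, isMainThread, workerData } = require('worker_threads');

    if (isMainThread) {
      const shared = new SharedArrayBuffer(4); // genuinely shared, not copied
      const counter = new Int32Array(shared);
      const worker = new Worker(__filename, { workerData: shared });
      worker.on('exit', () => {
        // the worker's writes are visible here, with no IPC serialization step
        console.log(Atomics.load(counter, 0)); // 1000000
      });
    } else {
      const counter = new Int32Array(workerData);
      for (let i = 0; i < 1e6; i++) Atomics.add(counter, 0, 1);
    }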
> I did not make a statement about the typescript compiler/transpiler nor mypy, I made a statement about the typing module, which was brought about by PEP 484 (proposed in 2014) and accepted in 2015, a full 3 years after the typescript compiler came on the scene. JS got gradual typing first -- there are even earlier examples of syntax enhancement (though it was mostly sugar) with CoffeeScript.
> JS is undeniably more innovative in this aspect -- it's also made more mistakes.
JS has none of that. Typescript has it. They are not the same thing.
I'm comparing an external tool with an external tool. Mypy could type check Python before typing was officially in the stdlib, just as the typescript type checker does for JS. Type annotations are not part of JS.
> It doesn't matter what browsers support, because the code is transpiled. The point is that you have access to better tools as a developer while writing code.
It very much does. Transpiling is not a free operation, and comes with a lot of pros and cons.
It's also not a universal operation. Most JS projects do not use typescript. Many projects still don't even use a transpiler at all.
> There's a difference between what's possible and what's easy/facilitated well by the standard library of a language. I did not imply that python could not do non-blocking processing. My point was that these patterns were not easy and were not standard
JS async wasn't easy either at the beginning. We had a decade of callback hell, then another war over the promise implementation, all of that wrapped up in code to make browser differences manageable. Not to mention the 2 nodejs forks in its short lifetime, with JXcore rewriting the entire async model.
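To illustrate how far that journey was, the same two-file read went through roughly this evolution (a sketch; handle and use are placeholders):

    // circa 2010: nested callbacks
    fs.readFile('a.txt', 'utf8', (err, a) => {
      if (err) return handle(err);
      fs.readFile('b.txt', 'utf8', (err, b) => {
        if (err) return handle(err);
        use(a, b);
      });
    });

    // today, inside an async function (readFile being a promisified version)
    const [a, b] = await Promise.all([
      readFile('a.txt', 'utf8'),
      readFile('b.txt', 'utf8'),
    ]);
    use(a, b);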
Let's also not forget we managed to scale web services way before NodeJS was a thing, with dozens of languages.
Yes, making an async HTTP request is more natural in JS than in Python. It had better be: JS is a Web-oriented language, while Python is a general-purpose language. However, I have yet to witness a service where this would have made such a huge difference all in all. Even facebook uses long polling if I recall, something you can do with threads easily on the client side, and support with a proxy such as nginx as a front end, even with nodejs as a backend.
> Javascript has accessible (and thus widely adopted) and easy-to-use implementations of the following features, today in 2019:
> - Rigorous and powerful compile time type checking
Not JS. Typescript. And typescript and mypy came out together.
> - Parallel computation (without spawning another process or the effects of a GIL)
Web workers consume resources of a similar order of magnitude to a python instance, and communicate data by message passing the same way. And no GIL.
The difference? Python came first, and has more consistent support (you need a fallback in JS if you use it on a website). I don't think we should be proud of it, but it's not the middle ages you are talking about.
> - Asynchronous IO
> The combination of these three features being accessible is what makes Node stand out in front of Perl, PHP, Python, and Ruby.
The advantage of JS is historically a better async API. Let's not overstate this though.
And even if JS were vastly superior in all those areas (which is, again, overstated), it is hardly enough to draw the conclusion that:
> has the best feature set of the "scripting language" tier of interpreted programming languages
Readability, ease of automation, rich stdlib, ecosystem, ability to deal with formatting/parsing/date handling, community culture and so much more are as important, if not more important, for a scripting language. Scripting, after all, goes far beyond the Web.
> Python may have "had" the capability to do the things above at various times in the past, but it's technically been capable of doing just about anything, like any other programming language -- you can launch sub-processes from bash, but that doesn't mean you should use it for large applications. What makes a good language is accessible and easy-to-use features that make you both correct and productive.
Youtube having been rewritten in Python proves that it was more than merely capable. Python also powers instagram and reddit.
When google rewrites their Python services for perf, they don't go for JS; they choose Java, C or Go. That's because, again, Python is in the same order of magnitude of perf as JS on average.
> Node has more of these tools available, and they're more accessible in Node than they are in Python, and more people are using them.
Yes, JS is a Web language. That doesn't make it the ultimate scripting language.
> Typescript offers more robust type support than typing+mypy.
This is a gratuitous claim. Especially since typing is in the stdlib, while typescript annotations are a syntax error in ECMAScript.
> Node worker threads, though very recent, provide parallel computation without child subprocesses, in a shared memory context with no GIL.
Objects passed to web workers are serialized and deserialized, just like with python multiprocessing.
> Asynchronous IO is second nature to JS developers, while that's definitely not the case for python developers.
Ajax calls being the basis of today's web, the contrary would be alarming.
> Transpiling is not a free operation, and comes with a lot of pros and cons.
You could say the same for languages that require compilation, it's not exactly an uncommon pattern. JS will almost always have some kind of operation applied to it when running on the web (minification etc) so adding TypeScript transpilation is often a very small step.
Is that ideal? No. But Python doesn't even run on the web, so of course it doesn't have these issues.
> developers that don't think specifying types is helpful are still early in their careers (those who suffered in the dungeons of Java get some slack)
I'm not high-confidence about this heuristic because 1) the "class label" is a bit subjective and 2) I don't have a massive sample size, but attitude toward typing is one of the best filters for engineer quality I've come across.
(No, I haven't put this in practice in interviews, and I don't ask any related question. It's just a correlation whose strength I've found really striking)
It's part of a broader class of heuristic I've been noticing recently (again, still a very preliminary, low-confidence model): things like "lack of reliable, quality devtools" and "inability to customize interfaces" bother highly-productive engineers more than less-productive ones, which is what you'd expect given that inefficient interfaces to their work have a far greater cost to the former. Along these same lines, if (eg) you're already pretty bad at understanding complex systems by reading code, you're much less likely to notice or care about how much more difficult dynamic typing makes this.
I should be clear: this isn't a general-purpose argument against dynamic typing, and I'm not suggesting that the situation is as simple as "dynamic typing bad, static good". They both have their strengths and weaknesses, which make each appropriate for different situations; I ended up picking Python for a company whose tech org I was responsible for building, and as much as I personally find engineering in Python to be horribly unpleasant, it was the right choice for the company given the environment it was in, and I would make the same choice in the same situation.
I don't think my theory above is particularly novel, and I understand the self-serving bias that it aligns with, which is part of why it's still such a low-confidence model. But I've been unable to wrap my head around the number of nonsensical defenses of dynamic typing I've seen over the last few years, and this model is the first to offer an explanation: if you can't tap into the benefits of static typing, of course the accessibility and flexibility of dynamic typing is going to seem like it far outweighs the foofy, theoretical benefits that static typing brings to stability and comprehensibility.
> It's part of a broader class of heuristic I've been noticing recently (again, still a very preliminary, low-confidence model): things like "lack of reliable, quality devtools" and "inability to customize interfaces" bother highly-productive engineers more than less-productive ones, which is what you'd expect given that inefficient interfaces to their work have a far greater cost to the former. Along these same lines, if (eg) you're already pretty bad at understanding complex systems by reading code, you're much less likely to notice or care about how much more difficult dynamic typing makes this.
I think for me it is the other way around, actually: I believe I'm worse than the average programmer at keeping complex states in my head, thus I push heavily for high quality code and tools to reduce the times I have to fight through particularly nasty debugging / refactoring sessions.
This is the exact rationale I expect experienced developers (at least the ones I'm capable of recognizing) to have -- I look for architecture, solutions, and tools that make it impossible for me to make mistakes, instead of trying to rely on having tons of discipline. Pay the cost once, then never worry about it again.
No one ever talks about it, but one of the most valuable things I've learned to start doing is writing pre-push (and sometimes pre-commit) hooks into my projects, adding the corresponding `dev-setup` or whatever command to put them in place, and running linting & unit/integration tests before every commit/push. You could solve the problem of committing/pushing buggy code by just "writing better code", but with this methodology that problem completely evaporates.
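The hook itself can be any executable; here's a sketch of a Node version (the commands are illustrative):

    #!/usr/bin/env node
    // .git/hooks/pre-push -- put in place by the project's dev-setup command
    const { execSync } = require('child_process');
    try {
      // abort the push if linting or the test suite fails
      execSync('npm run lint && npm test', { stdio: 'inherit' });
    } catch (err) {
      process.exit(1);
    }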
> I think for me it is the other way around, actually: I believe I'm worse than the average programmer at keeping complex states in my head
If you think you are actually good at keeping complex states in your head (not just better than the average human/programmer) and therefore don't need high-quality code/tools, you are setting yourself up to be destroyed by your own hubris. There are no humans who are good enough at keeping complex states in their head to actually understand the gigantic programs that we work on these days. Back when a few hundred kb of memory was a lot, some people might have managed it, but not anymore.
Reliance on high quality tooling to take care of the minutiae for you in a principled way isn't something that goes away for highly-productive engineers. If anything, IME (and in line with my GP comment), it grows. If you can hold X units of complexity in your head, and you can map Y units of functionality per unit of complexity, then any improvements in tooling that increase Y will be more effective for those for whom X is higher.
You're moving more quickly and at higher levels of abstraction, so getting dragged down into the mud of reading every line of some shitty code to figure out an ambiguous type or side effect has a far greater cost (the lack of const or anything similar makes it even worse, since you can't write off a function that takes an object even if your sphere of concern is just that object). By contrast, if the functions you're writing are trivial and you read and understand code line by line, dropping into the stack just blends into the rest of the code-reading (or so I assume).
(I still remember inlining an absolutely grotesque type-hint in order to get some ammunition to convince someone to refactor his sewage spill of a function. I wish I had written it down, but the type hint had something like twelve tokens and three levels of nesting at its deepest point. Even that guy wouldn't have written that code if he had had to write out the type during development, or at least I like to think that. Shudder)
The rabbit hole goes very, very deep when it comes to engineering talent (or lack thereof...). You're probably a much higher-percentile quality engineer than you think, as I've met people working in industry whom[1] I had to fight tooth and nail to institute the absolute basics of common-sense engineering practices. The fact that you're aware of the value of static typing is a pretty good signal IME, and as I said, hoping that typing et al will "save you from yourself" just means you don't have the brain of a computer, and isn't a habit that talented engineers lack.
[1] the long and short of it is that I took an ill-advised shot at being the first employee of a company with two non-technical co-founders; I wanted something very specific out of the opportunity so it was fine for me (or so I thought...combining founder psychology with two non-technical founders is a freaking nightmare), but I wasn't willing/able to tap people in my network, all of whom were at much better organizations doing much more interesting things and getting paid tons more. So I got a nice look at the underbelly of the engineering labor market, enough to make me stay far, far away from small startups unless I have a very, very good reason to want to work with them or very good assurances that they'll be able to hire at at least the baseline level of a big company.
I've never really understood the advantage of dynamic typing. Not having to specify type to me seems like it's barely an advantage and immediately gets outweighed by the fact that you don't have compile time type checking (or have to jump through hoops to get it).
Typing out code tends to be a minor part of writing software, so you don't gain much there. Furthermore, you don't want to mix types in the same variables either, because this comes at a cost of performance and complexity.
So, what kind of flexibility do you get from dynamic typing?
As someone who is very pro-static typing, the big annoyance for me is that sometimes you have to write a load of boilerplate purely because of the type system.
It's got a lot better in the last decade, but with the more advanced features of a language you can find yourself slowed down in a few ways. Maybe by having to explicitly declare types where you'd normally implicitly declare them. Or you might have a load of methods that actually throw 'Not Implemented' errors because you are forced to match a massive interface but it only has to do one thing (yes, C# has an actual NotImplemented exception because this is so common).
It's annoying and can interrupt your flow: it can mean you have to spend a bit of time searching for the right voodoo code to make it work, or that you sometimes feel like you're doing a rain dance just because of the type system.
The other thing is that you're sometimes forced to cast a variable to a type that you know is already that type, but a method has returned it as an 'object'. It feels really clunky, e.g. event listeners.
> Or you might have a load of methods that actually throw 'Not Implemented' errors because you are forced to match a massive interface but it only has to do one thing (yes, C# has an actual NotImplemented exception because this is so common).
I would heavily argue that if you encounter such a scenario you are probably Doing It Wrong® or you were unfortunate enough to inherit/have to work with some badly designed code.
In an ideal world maybe, but there's no point fully fleshing out an interface where you know you're not using those methods. I'm old enough and ugly enough to comprehensively know what I'm doing.
It's simply because interfaces in a framework are occasionally quite big and unwieldy; I'm sure their developers thought that was for good reasons. The other option is to split the interface up and have a class inherit from a ton of interfaces, which is just as annoying.
There are a few other replies here I'm rolling my eyes at, because real-world programming is messy and being too dogmatic about ideals simply isn't practical.
> Or you might have a load of methods that actually throw 'Not Implemented' errors because you are forced to match a massive interface but it only has to do one thing ... The other thing is that you're sometimes forced to cast a variable to a type that you know is already that type, but a method has returned it as an 'object'. It feels really clunky
These are not "problems with static typing", though. Dynamically-typed languages have you do these things all the time, it's just that you don't even notice it anymore because it's so common and you don't have static types as an alternative there.
I’ve hated dynamically typed scripting languages for over two decades and could never understand why anyone could possibly advocate for them. But I was “forced” to learn Python within the last year, and it was also my first time doing any type of automation, mostly with AWS. Before then, I only did “real programming”.
I have to admit that for short programs without a lot of business logic where you are just providing a little glue - i.e. every time a csv file hits S3, run this lambda that executes a sql script to import it into a Redshift (OLAP) database, transform it with some queries, export the results back into S3, import it into Aurora (Mysql) and then load it into the appropriate tables - Python is my go-to language out of the three “approved” languages (C# and JS being the other two).
I still don’t know why I like Python but hated Perl and PHP. But, I do know why and when I reach for Python over C#.
- when I know that no more than one developer will be working on the same code and you don’t need the benefit of strong typing to help ensure that changing the signature in one place won’t affect code somewhere else.
- Little conditional business logic. When you can easily gain complete code coverage just by running the script.
- small program that I can easily reason about.
- in the case of AWS and lambda, you can edit and test code right in the browser for both Python and JS. If you don’t need anything but the AWS SDK (Boto3), there is nothing to package. Once you get your code right, you copy and paste your code from the AWS console, export your configuration as a CloudFormation snippet and create your CI/CD pipeline.
It’s also easier to debug directly on the server without having to either have a compiler on the server or trying to set up remote debugging.
Until about a month ago, I had only written a REST API in C#. I wrote my first service in Node/Express just because I wanted to be one of the cool kids. I hated every second of not having a strongly typed language, especially with my JSON objects. The tooling by definition can’t be as good, especially for refactoring. Yeah, I know about Typescript and I’ve played with it. I didn’t want to go down that rabbit hole and try to get buy-in from the company.
On a side note. I’ve used Mongo before but only with C#. With the C# Mongo LINQ driver, you are working with strongly typed collections and strongly typed LINQ queries. I can’t imagine the hellish landscape of using a weakly typed language and a weakly typed data store.
So I probably agree with you more than you think, but I'm trying to tilt the scales towards dynamic typing a little to counteract any bias I may have from the fact that all my work with static typing has been with talented engineers and all my work with dynamic typing has been with garbage ones.
I agree with you about there being no concrete advantage to not being forced to specify the type of something; honestly, I don't even like type inference in eg C++ all that much, except as syntactic sugar over really ugly template types for which the type just pollutes the code without adding any signal. But for example, I don't even really see the advantage of using auto in:
for (Widget widget : widgets) {
That being said, I do believe there is a concrete flexibility advantage, one that I pray you never need to take advantage of but that sometimes can't be avoided in seed-stage startups. There were a couple of times where I monkey-patched the definition of a function (including in an external library once! Oh the shame...) in order to get something working for a sales demo, and we just wouldn't have had the velocity to take shortcuts like that in a language like Java or C++, though I would personally make the fixing of the hack a P0 or at least a P1. These kinds of "intentional bugs" allow you a bit more flexibility with using tech debt, and in the hands of skilled enough engineers, can provide you with boosts of velocity when you need it in the short term (at the cost of overall velocity).
Similarly, some pretty nifty stuff in libraries is possible due to the flexibility of these languages. I'm not a huge fan of frameworks in general, but my understanding of Django's ORM is that it compares quite favorably to ORMs in other languages. Given the number of times I needed to do database surgery on prod (did I mention how much this company sucked?), it was a lifesaver to 1) have such an expressive interface to the DB and 2) have both an interpreter in the same language as our code and the ability to import and run production code. I'd imagine that pretty much every part of this would be more difficult in a less permissive language.
I’ve just started getting into Python and I generally don’t see the point of ORMs since they really don’t add much in my opinion. I’m really curious why you like the ORM for Python.
The only ORMs I’ve ever found useful are those based on LINQ, because LINQ makes any ORM (and whatever you call the Mongo LINQ driver) a first-class citizen and it integrates into the language.
I have the same rough opinion of ORMs in general and I've never had the occasion to examine it: it was the right choice for our org because there's no way in hell the engineers were going to learn to write good SQL when they could barely write good Python, especially when I'm far from a SQL expert. I was at the time capable of writing good SQL (presumably after a ramp-up period), but I doubted my capability to teach it well without significant time investment. The ORM was a path of least resistance thing, providing a simple, easily-comprehensible interface in the language they were already writing.
In the specific context of stuff-is-on-fire, production database changes (I'll reiterate, this place sucked), the fluency of the ORM and its seamlessness with our code was helpful. I don't know much about LINQ, but I gather that the syntax is declarative, which is a perhaps-minor speedbump that would nevertheless have made me less fluent in the specific above-described scenario.
In the general case of engineering, LINQ looks like something I would far prefer.
This isn't a criticism of your comment or anyone else's, but I have to say I'm amused by how all of my responses downthread of my comment are in defense of me being charitable towards more permissive languages (as I said, my bias probably swings solidly the other way, and if I never engineer in a permissive language again, I'll die happy). I kind of assumed it would go the other way, with HN jumping on my comment as elitist or something.
For instance if you write the following using LINQ:
var x = from c in customers where c.Age > 65 && c.Sex == "male" select c;
var results = x.ToList();
If customers represents an in-memory list, it gets translated as regular old compiled code.
If customers represents a sql database table, it gets translated to SQL and run on the database server.
If customers represents a Mongo collection, it gets translated to a Mongo query.
Of course you can do stuff like switch out what customers represents at runtime and it will be translated appropriately. You also get autocomplete help and compile time type checking.
> Similarly, some pretty nifty stuff in libraries is possible due to the flexibility of these languages.
You pay for that "flexibility" quite directly in lack of verifiability and of adequate performance. It's not a free lunch - if it was, we would all be using LISP anyway.
If those were the reasons for LISP's lack of popularity, then LISP would have been more popular than Python or PHP.
What happened is that 1980s hardware was not ready for languages slower than C. C became the dominant language, and C-style syntax became the dominant syntax, with C++, Java and JavaScript following its lead. Eventually people adopted languages slower than C, but they didn't adopt paren-prefix notation.
People used slower-than-C languages throughout the 1980s and onward: for instance BASICs (eventually Microsoft's Visual Basic, starting in 1991), and Ashton-Tate's dBase and its derivatives ("xBase").
Yes, I didn't say it was a free lunch. The entirety of my comments here are about how there are pros and cons to every language that manifest in different situations, and things like "ability to go into ugly technical debt for short-term speed boost" being a pro has nothing to do with the claim that it's an unalloyed good, but there are situations that play to its strengths and weaknesses. I hate to be rude, but I'm really not sure how you thought I hadn't already made this point if you had read the thread we're on. If anything, my original comment comes close to saying that static typing is superior, period (though the same caveats apply about different situations requiring different sets of strength and weaknesses).
The obvious example here is that I unquestionably do my personal scripting in Python instead of C++, despite much preferring engineering jobs in the latter to those in the former. Different situations, different requirements.
> I've never really understood the advantage of dynamic typing.
It's easier to write code in dynamically-typed languages because (generally speaking, I'm pretty sure there are exceptions to this rule) dynamically-typed languages are not as verbose as statically-typed languages.
The focus is now on writing bug-free programs as best as we can, but I remember that when I started programming about 15 years ago there was still a vibe of "let's make it as easy as possible for people to program so that everyone can write code; bugs don't matter (at least not that much)". That was the general vibe behind HTML and browsers (the way that you can see the actual HTML "code" that displays this very page, even if it may be "broken" HTML), behind early JavaScript (anyone remember how one was able to include different JS widgets on one's HTML page?) and behind very cool projects like One Laptop per Child [1] (which got killed by Intel and MS among others and which, presumably, was allowing its users to see the Python source-code of each program they were using). Those were the dying times (even though I didn't know it at the time) of "worse is better".
But in the meantime the general feeling around writing computer programs has changed, correctness and "bug-free-ness" are now seen as one of the main mantras of being a programmer, and the programming profession itself has built quite a few walls around it (TDD was the first one, "use static types or you're not a real programmer" seems to be the latest one) so that the general populace can be kept at the gates of the "programming profession". In a way that suits me, because I'm a programmer and fewer people practicing this profession means more potential guaranteed earnings for me, but the idealist in me that at one moment thought that programming could (and should) become as widespread as reading stuff like books and newspapers feels let down by these industry trends.
> It's easier to write code in dynamically-typed languages because (generally speaking, I'm pretty sure there are exceptions to this rule) dynamically-typed languages are not as verbose as statically-typed languages.
I really don't understand this argument. What typed languages have you used?
Decent type systems are able to infer most or all of your types, meaning you get the benefits of a type system without having to explicitly type everything.
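TypeScript, for example (a minimal sketch):

    const nums = [1, 2, 3];               // inferred: number[]
    const doubled = nums.map(n => n * 2); // inferred: number[]
    doubled.push('x');                    // compile error: 'string' is not
                                          // assignable to 'number'

Not an annotation in sight, and the mistake is still caught.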
Also, when you code, is your bottleneck seriously how fast you can type? If it is, I would suggest you need a different approach.
> But in the meantime the general feeling around writing computer programs has changed, correctness and "bug-free-ness" are now seen as one of the main mantras of being a programmer
We are engineers, or at least we should be. Correctness has to be a goal, although I feel like even saying that is laughable right now. The amount of effort to prove a program is correct is extremely high for anything nontrivial. Robust type systems at least help cover our bases and help us reason about the structure of things, and more powerful type systems allow you to push even more invariants to the type system.
> Also, when you code, is your bottleneck seriously how fast you can type? If it is, I would suggest you need a different approach.
My brain can process only so much information; the fewer things it has to process (like fewer type declarations), the better.
> We are engineers, or at least we should be. Correctness has to be a goal, although I feel like even saying that is laughable right now.
That’s what I was trying to explain, in a very convoluted way, that back in the day programmers were not intrinsically viewed as engineers, and that that viewpoint was seen as a valid one. Engineers were seen as building cathedrals while programmers (or hackers, if you will) were seen as building bazaars and other random stuff (like forums written in Lisp-like languages like Arc).
> My brain can process only so much information; the fewer things it has to process (like fewer type declarations), the better.
Ignoring that I've already mentioned inference, which makes this a non-issue: my other problem with this is that just because you don't have a type system or don't have to write types doesn't mean that the constraints of that 'information' go away.
Your function still has a set of invariants that need to be maintained; you've just opted to make them invisible and have your program crash when they are violated, instead of knowing statically when you've made a mistake.
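A tiny sketch of the difference (applyDiscount is a made-up function; assumes TypeScript with strict null checks):

    function applyDiscount(price: number, discount: number): number {
      return price * (1 - discount);
    }
    applyDiscount(100, undefined);
    // TypeScript: compile error ('undefined' is not assignable to 'number')
    // plain JS: silently returns NaN, and the failure surfaces somewhere
    // far away from the actual mistake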
>It's easier to write code in dynamically-typed languages because (generally speaking, I'm pretty sure there are exceptions to this rule) dynamically-typed languages are not as verbose as statically-typed languages.
I don't understand this reasoning though. Actually typing out the code takes a fraction of the time that you spend on writing software. Furthermore, we have autocomplete, and have had it for a long time in statically typed languages.
I was formally taught programming about a decade ago in Python, but I naturally gravitated towards Java (and C) because of static typing. The same arguments were made back then and I just didn't get them.
As my comment near the top of this comment tree hinted, I pretty much do find statically-typed languages to be overall quite superior, but the reason that I find myself defending dynamically-typed languages all over this thread is because of the hard-line being taken by many that they couldn't possibly have any advantages in any situation. I do plenty of scripting, to improve both my personal and professional productivity, and I don't even think twice about using Python instead of Kotlin or C++ or what-have-you. I iterate rapidly on my scripts, they're not part of large engineering systems that a reader may be bouncing around, the rare bug that may pop up isn't production-critical, and the ability to be both fluent and concise just makes the code faster to read and write, given the small scope of the program. This is sort of a lazy example, because it's not far from the truth to say that scripting languages are suited to scripting and that's why they suck for engineering, but it illustrates the point that different situations can highlight or downplay the importance of the relative strengths and weaknesses of various languages and paradigms, and there are few programming-language characteristics that are universally good[1].
The problem here is that half the people on this thread are looking for a simple, low-dimensional model to cram the issue into, requiring dynamic languages to have _zero_ advantages in order to sustain their view that statically-typed languages are superior overall.
I'm not really sure how to fix this tendency, and it's by no means limited to disputes within the field of programming. Simpler, black-and-white models are much easier for the brain to handle, so people gravitate towards them even when they don't match reality as accurately.
[1] though it is fair to say that a big chunk of the advantage of dynamically-typed languages is "accessibility to those who barely know how to code", which doesn't affect the calculus for someone who's capable of writing in either type of language
The problem is not that it takes a while, but that it takes a while when I'm in the middle of a complex task. Because of language verbosity, I might need to context-switch away from solving a hard problem.
For instance, if I am writing a complex function `frobnicate(x)` and I notice that I need to pass a configuration object to the function, I can just write `frobnicate(x, opts)` in the function declaration, use `opts.foo` to access the relevant option, and go back to the complexity that is still fresh in my mind.
With a verbose type system, I need to pick a type for `opts` NOW, which means I need to define a new type NOW, which will typically involve a new file with at least a dozen lines. By the time I'm done generating those stubs, the hard bits of the original function will have faded from my mind.
The code I write, in either case, isn't the final code. Once I have an initial working version where the complexity has been embedded in actual code instead of floating around in my mind, I can start iterating, refactoring, annotating, and so on. In TypeScript, at this point I will define a new options type and annotate `opts` with it.
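Concretely, the two stages look something like this (using the placeholder names from above):

    // first pass: stay in flow, leave opts loose
    function frobnicate(x: number, opts: any) {
      return opts.foo ? x * 2 : x;
    }

    // later pass, once the hard part is done: pin the shape down
    interface FrobnicateOpts {
      foo?: boolean;
    }
    function frobnicate(x: number, opts: FrobnicateOpts) {
      return opts.foo ? x * 2 : x;
    }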
Or you could just not. I mean, you could just write out your code as if you already had the types specified and then fill in the boilerplate and fix errors afterwards. That's essentially what happens with dynamic typing anyway, so the only major difference is that your IDE/editor might notice.
In a statically typed language, the code will not run until I have convinced the compiler that it should. If I wanted to explore the behaviour of `frobnicate` before committing to a design, I would still have to define all those satellite types that may well become unnecessary.
I think the main reason some people say they prefer dynamically-typed languages is because of a feature that historically correlated quite well with it: they provide almost instant gratification.
There are two parts to that: no required boilerplate (“Hello, world!” shouldn’t require more than twice the number of characters, and requiring a manual compile-link step is a no-no) and low time to first output (who cares that each value carries a hundred or more bits of type info and takes an indirection to access, as long as that “Hello, world” makes it to the screen in <100ms)
Historical counterexamples to the claim people prefer dynamic typing are the ROM-based Basics of the 8-bit computer era. Statically-typed, but popular, IMO because they ticked both boxes.
> Typing out code tends to be a minor part of writing software, so you don't gain much there. Furthermore, you don't want to mix types in the same variables either, because this comes at a cost of performance and complexity.
> So, what kind of flexibility do you get from dynamic typing?
I'm the same, I don't get it at all. If you're going to voluntarily opt-out of automatic safety checks you better have a mind blowing list of good reasons for it.
Serendipitously, I actually had the occasion to do a code review of some Tensorflow code in C++ today, and man was it nasty compared to what it would have been in Python. This is pretty much the kind of thing I was thinking of when I mentioned libraries being cleaner in ways where the trade-off is worth the loss of information.
> attitude toward typing is one of the best filters for engineer quality I've come across
I tend to agree, and I classify this as part of a larger attention-to-detail / conscientiousness / desire-to-get-things-right trait which I associate with good programmers.
> things like [...] "inability to customize interfaces" bother highly-productive engineers more than less-productive ones
This I'm skeptical about. Enthusiasm for tricked-out vimrcs, heterodox keyboard layouts, and the like seems evenly distributed across the talent spectrum. If anything, there's a weak negative correlation.
But, as with wutbrodo's, it's likely these theories are partly self-serving.
> This I'm skeptical about. Enthusiasm for tricked-out vimrcs, heterodox keyboard layouts, and the like seems evenly distributed across the talent spectrum. If anything, there's a weak negative correlation.
Thanks for your perspective. We may be running in different crowds but tricked-out vimrcs (or even using vim/emacs) haven't ever been that popular around me. Even at Google, there were MacBooks everywhere. There just didn't seem to be any interest in using vim or Linux to signal that you're hardcore, as opposed to actually getting value out of it. At the company I'm currently at, the person generally understood to be the most talented guy on the team I'm joining just happens to be the only one on emacs instead of VScode. To reiterate, all of this is pretty low-confidence for the reasons in my orig comment, but within my small dataset, the correlation has been quite strong.
In my personal experience: I spent an hour writing the base of a customized vimrc one morning at Google at the start of my career. In the years since then, the process has been 1) notice annoyance or repeatable process, 2) spend 5 minutes scripting it, 3) go on with work and life and see a positive ROI usually within weeks (ignoring the much more important benefit of not interrupting my thinking with manual tasks). I do something similar but much more ambitious for my desktop (currently an i3/Debian system that's almost entirely operated by keyboard). This incremental approach has worked out _really_ well for me, and the accumulated effect over the years is a system that's very, very customized to my use-cases and needs. It's to the point that, at the job I just started at, I'm a little in shock to see people stretching a single chrome window across one massive monitor and an IDE across the other, with a dozen floating windows behind each. By contrast, I have 5-7 workspaces in my working set at any time, each one of which has multiple windows in use. Again, I'm pretty sensitive to the risk of feeling productive instead of being productive by building tools that aren't worth it, but the incremental approach generally constrains me to the stuff that I'm sure will be helpful.
(There are of course, places where the value/effort graph has a big discontinuity, and I take the time probably once every ~six months to handle stuff like that, in the form of a mini-project that I do after thinking about whether the ROI would be worth it. I also often get the chance to try my hand at new things in the course of doing so, which helps shift the balance. Eg, right now I'm working on breaking the frustrating interface barrier that Chrome presents: as much as I like the web, shifting so much into the browser and away from the OS/WM has been a giant leap backwards for usability, particularly given how obsessed website designers are with mouse-heavy interfaces)
> things like "lack of reliable, quality devtools" and "inability to customize interfaces" bother highly-productive engineers
I've seen people go down a rabbit hole of optimizing tooling to the point where it crowds out using those tools to actually accomplish anything. At some point worse is better.
Of course; if you ignore the possibility of over-optimization, then the claim becomes almost a tautology. In practice, however, I notice this a lot less often than I notice the reverse.
Nitpick: If you are using another language that compiles to JS, I think an improved formulation would be "JS has the best feature set of the runtimes / ecosystems used for scripting".
(I like ClojureScript and Elm more than TypeScript for web development, and prefer Python for non-web scripting, YMMV)
I think that the language tooling is really powerful.
I work in both Python and JavaScript, and I often miss Typescript’s power. I think Python still has a big advantage in the libraries.
The standard library is full featured and well documented. But perhaps just as important is that libraries are also really often high quality with clear documentation.
I think there’s a lot of interesting JS libraries and experimentation. But I love not feeling a need for better Python libs, because they’re already so mature.
I think that a “tidyverse”-style project in JS to offer a “proper standard library” across the ecosystem would do wonders for the bit of a mess that server-side libraries are in at the moment. Something that could be as effective as jQuery was at corralling the DOM API. The Node.js APIs are good baselines but not super fun to use directly...
For me personally, the huge turnoff for JS/TS is the library ecosystem, and specifically this notion that everything is better as a myriad of tiny 10-line libraries with a spaghetti dependency graph.
Python is much more traditional in that it has a rich standard library to begin with, so most third party libraries are also bigger, and it's easier to keep track of what does what.
> a myriad of tiny 10-line libraries with a spaghetti dependency graph.
I'm curious why people dislike this. I'd rather have lots of little decoupled libraries than huge monolithic dependencies that attempt to do everything in one big folder. The big libraries often duplicate functionality found in smaller tools, and I would rather they simply consume a bunch of small tools that can be shared across the ecosystem.
It reminds me of the oldschool pre-systemd UNIX philosophy of "do one thing, and do it well". Did everybody change their minds on this?
The JS libraries usually fail at the "do it well" part.
* UNIX tools don't get updated every week.
* UNIX tools don't get deprecated and replaced every month.
* When UNIX tools do update, the updates don't carry a risk of adding shady code from new maintainers because the original author "got bored lol" and handed the repo over to the first person who asked.
* When UNIX tools do update, the updates don't introduce unvetted and undocumented code that will, for example, start displaying Christmas decorations in your UI on certain dates.
How many UNIX tools actually come in "writing this from scratch is faster than searching for a tool that already does this, so I'll just write my own" size (except for `yes` and such)? Trick question: is trying to install a new UNIX tool similar to this [0]?
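For scale, here is roughly the entire useful surface of a package in that size class (a sketch, not the actual left-pad source):

    // the whole "library"
    function leftPad(str, len, ch = ' ') {
      return str.length >= len ? str : ch.repeat(len - str.length) + str;
    }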
One of the aspects of, ahem, bazaar-style development that we don't talk about much is that decisions get made by whoever can be bothered to turn up. If it so happens that the only people maintaining ls are the three people crazy enough to care about maintaining ls, then ls gets crazy decisions.
Another difference is that the unix tools are a bounded and known set of utilities that work independently and compose well together and try not to overlap in functionality. Meanwhile, js modules are an unbounded set of unknown modules that are intertwined and aren't designed to work together and overlap in functionality.
There are some downsides to it. The most critical issue is the left-pad threat: A tiny, unknown dependency with 3 stars on Github ends up being part of a huge project. Nobody in JS audits every single dependency up the tree, because there are too many of them. So situations like what happened with left-pad absolutely can and will happen again.
But there are other misc issues. For example, each package is a separate download with a bunch of metadata and overhead, so on download it spends a lot more time on those two lines than it would if they were just part of the library.
I like code splitting and there's a lot of benefits to the approach JS is taking, but it's not without its faults.
The actual ideas in computing when it comes to modular systems are cohesion and coupling. Modular systems should have high cohesion and low coupling.
The objection to leftpad levels of granularity is that it is high coupling, lots of little modules inevitably requiring one another in large numbers and complex ways. And a lack of any grouping at all is actually a lack of any sort of cohesion, coincidental or otherwise.
The definition of "one thing" can vary a lot. For example, libpng does one thing: it reads PNG images. But that's a much bigger thing than something like left-pad.
Interfaces should be simple but functionality behind that interface should be non-trivial, otherwise it is not a net value add. John Ousterhout speaks about this much better than I could.
I've worked on lots of nodejs projects with 1-2k transitive dependencies. I've seen plenty of simple projects end up with close to 1 gig of files in node_modules. And almost all of those files are never used at runtime.
I'd be a lot happier with npm if tiny libraries took up less space on disk. Storing a local copy of each dependency's readme file, license file, package.json and test suite is overkill when the module itself is only a few lines long. Its also very common for sloppy package maintainers to accidentally ship binary files in their npm module (image banners for their readme.md, test data, etc.)
Npm should only download the javascript file for small single-file modules. Nodejs itself already supports this - if you have a javascript file in your node_modules directory, require() will pull it in like normal. The license might need to be prepended to the file in a comment block, but this could easily be done automatically.
The problems come in when one of your little dependencies itself depends on something else. You want a single level of dependencies. Monoliths are good for this because they try to be self-contained.
People don't know how to handle libraries that update frequently because it requires more buy-in on a specific aspect of CI/CD that folks don't often invest in (admittedly because it's not immediately obvious).
I think if we look at it another way, you're going to find that this spaghetti dependency graph is the natural state of every codebase.
When people need to do a thing that has already been done a 1000 times there are at least 3 ways:
(1) write it yourself
(2) copy/paste someone else's code
(3) import/vendor someone else's code
(1) is definitely not a sustainable choice, especially for something that is frequently done.
The real interesting bit here is that I think (2) is just a shitty version of (3). You inherit the benefits and the bugs of the code you copy/pasted, and likely understand that code less than (1), which is increasingly true the more nuanced (and probably performant/good) the code is.
I think the question is whether you can see your spaghetti dependency graph or not, and how big the strands of spaghetti are. It's reasonable but likely pointless to argue about the size of spaghetti.
JS's library ecosystem lays bare the actual nature of bazaar-style development. The thing about cathedrals is that they take a while to build, and even then, without the right people you can build bad ones -- JS's alignment to the bazaar style of doing things is key to its iteration speed, and is its best tether to the UNIX model.
All that said, not having packages be immutable from the get-go, and willy-nilly updating of dependencies without manual review are indeed problems with the ecosystem, with immutable packaging being the only one anyone can actually fix with code.
(I guess Perl does too, but I haven't used Perl for 10 years)
I admit the promise pattern may be simpler to understand, but if you need to build a heavily evented system Node doesn't provide anything unique that can't be achieved in other ecosystems.
This is right, but only with a very limited interpretation for "support for threads", which does not include actual parallel computation. Not only is there a problem with the GIL, the only "threading" that ruby and python support natively is the spawn-another-process-and-find-a-way-to-coordinate kind (ala multiprocessing[0] in python).
Also, the async IO support via libraries are there for ruby and python, but that brought along with it significant fragmentation (at the very least in the python case, I know less about ruby) -- no one was 100% sure whether to use tornado, twisted, gevent/eventlet for a very long time.
A quote from another comment I wrote[1]:
>> If we really want to get in the weeds, we could talk about python 2.x's floundering around async IO -- IIRC Twisted/Tornado/Gevent/Eventlet were all close to equally good options (and thus mindshare was split) while node had it built in, with a virtualenv-by-default packaging system. Also there's WSGI -- the interoperability it provides is cool, but also means more moving parts than a simple node+express setup.
[EDIT] - It's a little unfair of me to decry the fact that multiple good solutions existed as a bad thing when I'd say it was a strength of the ecosystem. I do think it's valid to discuss the issues with packages that weren't compatible across interface boundaries between these async io libraries though.
> (I guess Perl does too, but I haven't used Perl for 10 years)
Same here, it's been a similar duration since I've used Perl for anything, but even back then, the major difference between perl and the others was extremely good native regex support and the ability to spawn system threads that actually ran in parallel without coordinating another process.
> I admit the promise pattern may be simpler to understand, but if you need to build a heavily evented system Node doesn't provide anything unique that can't be achieved in other ecosystems.
My point was about the ease of achieving it... In the last ~3 years I've seen a wave of articles in the python blogosphere about how to properly use asyncio, and all these new async-enabled libraries popping up (the old ones that relied on twisted/eventlet/whatever else are still around of course, and perfectly good). What I'm saying is that JS has had this from the get-go, and it's been thrashing python/ruby on benchmarks that favor the use case for the same amount of time, especially if someone doesn't use one of the appropriate libraries.
NodeJS's async io story is "batteries included", and that's huge in this day and age. Whether node has the right interface or if they made it ergonomic enough is a whole 'nother thing though...
According to the Node 11 docs, worker threads are more lightweight than processes, but still heavy.
That makes me think they run in the same address space, but use separate runtimes. So it's more scalable than ruby and python threads or multiprocessing.
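Something like this, to make it concrete (a minimal sketch; the `fib` helper and file layout are made up, and on Node 11 you'd need the --experimental-worker flag):

```
// A hedged sketch of Node's worker_threads API.
import { Worker, isMainThread, parentPort, workerData } from "worker_threads";

if (isMainThread) {
  // The worker shares the process but gets its own V8 isolate and event loop.
  const worker = new Worker(__filename, { workerData: { n: 40 } });
  worker.on("message", (result) => console.log("fib =", result));
  worker.on("error", (err) => console.error(err));
} else {
  // Worker side: CPU-bound work that would otherwise block the main event loop.
  const fib = (n: number): number => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort!.postMessage(fib(workerData.n));
}
```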
JRuby doesn't have a GIL to begin with so there's no requirement to use multiprocessing to host Rails apps etc.
They're an improvement over a single GIL but not by much. IIUC, they're like Racket's Places: a complete duplication of the VM but sharing an address space. The basic implementation doesn't really save much memory but it's a bit more convenient.
There's been a couple of attempts to implement subinterpreters for CRuby too.
The GIL still allows for parallel IO so typical applications never need async. Modern Ruby just runs one process per core much like NodeJS in clustered mode or on JRuby with no GIL.
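For reference, the Node side of that process-per-core model looks roughly like this (a hedged sketch using the built-in cluster module; the HTTP handler is illustrative):

```
// One full OS process per core; the cluster module load-balances connections.
import * as cluster from "cluster";
import { cpus } from "os";
import * as http from "http";

if (cluster.isMaster) { // isPrimary on newer Node versions
  cpus().forEach(() => cluster.fork()); // fork one worker per core
} else {
  http.createServer((_req, res) => res.end("ok")).listen(3000);
}
```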
Lots of things work when your people and processes are good enough. We don't need seatbelts if we never crash. That can lead to problems with scaling your team if needed, however. At some point you might have to let a moron in, and stronger typing just might be the thing that prevents a catastrophic bug.
I did Python for embedded systems for a year. Python is so expressive and well equipped, but the lack of pre-execution type analysis leaves you vulnerable to severe defects unless your testing coverage is 100%. I would prefer Java over Python just for the safety the compilation step provides (and I don't care for Java). I love programming in Python, but system reliability trumps my personal desires in this case.
flake8 has also helped -- it's just a linter, but using it in conjunction with the typing module and running that in a watch loop in another terminal window has done wonders for me.
Strong typing is usually static, but it also entails more strictness (e.g. not allowing implicit nulls) and more features at the type level (e.g. sum types, higher-kinded types).
This really depends on the definitions of strong and weak, and that might be the point: C could be considered a statically but weakly typed language. I'm not sure of any other language like this.
> We stopped solely using strong-types after decades of only doing JAVA and C#, and it hasn’t decreased our code-quality.
How do you know this? It's exceedingly rare that anyone actually keeps track and monitors rate of defects in any application, but there have been studies done[0][1] on the ability of stronger typing in languages to reduce errors.
My condolences for your time using Java and C# -- they are often what people first consider when they think of languages with "good" type systems, but I dislike them both because what they introduce as "good" type systems are often very much not, in comparison to languages like Haskell.
It's a bit disingenuous to act like Haskell has been a performant/production-ready option for as long as those languages, but still: almost every time I talk to someone and we get to the topic of typing, I almost always assume that their bad experiences came from those languages.
> We didn’t leave strong-typing to leave strong-typing btw, we just started doing more and more with Python.
I work in public service, we track and benchmark everything. Often way more than is needed, but that’s the consequence of a governing body that is very interested but doesn’t necessarily understand what you do.
I think we’ll see more and more use of the typing library, especially on big projects. Similarly we’re having debates on typescript for front-ends, not because we need it right now, but because it’ll probably really suck if it turns out we were wrong about not needing it.
I have years of experience with typed ECMAScript (3 years of ActionScript 3, 2 years of Java and 1 year of TypeScript).
What you're saying here and in your previous comment makes sense. From my experience, engineers who have a LOT of diverse experiences with different paradigms tend to agree that types don't reduce bug density in the code.
Types are a learning tool like training wheels on a bike; with comparable costs in terms of efficiency. Don't get me wrong, training wheels are great but you won't find them at the Olympics.
> Types are a learning tool like training wheels on a bike; with comparable costs in terms of efficiency. Don't get me wrong, training wheels are great but you won't find them at the Olympics.
That is such an idiotic comparison, I don't really know what to say here.
1.) Proper type inference means you don't have to explicitly write out most/any types, so there is no "cost in terms of efficiency". Some annotations might be required from time to time. If the type checker throws an error, that means your types don't align and that you need to reason further about your program - which is something you need to do in dynamic languages as well, just without any assurance in the first place.
2.) Types allow performance optimisations not available in dynamically typed languages, so again - no "cost in efficiency".
3.) In dynamic languages you don't get as much help from your tools as in static languages, so you can't exclude whole classes of bugs that your compiler finds for you.
So, where does this "cost in efficiency" occur? How is static code analysis a "training wheel"?
> "extensive" unit testing is a poor man's type checker
Wow! No!
If this is what you feel, you might have been doing unit testing wrong the whole time! Unit testing is a powerful technique that is useful regardless of your language. The first unit testing libraries and frameworks started with statically typed languages.
I don't think that the point is that unit testing is a poor man's type checker in general, but that "extensive" unit testing in a dynamic language may include a lot of testing that could have been weeded out by automatic analysis based on a static type system.
Just chiming in to say you're correct -- that's exactly what I meant.
For example, if you write a function that only makes sense on numbers greater than zero, then make it take a `GreaterThanZero<Int>`. It's a contrived example but people often pride themselves on writing unit tests that check for "out of bounds" cases like that, and defensive coding in dynamic languages usually starts with "make sure they gave you the right things and nothing is null".
There are cracks to slip through with Typescript (it's not as good as using a language that enforces this stuff down to the runtime -- i.e. haskell), but in general you're going to avoid these kinds of errors in the first place if you write sufficiently useful types.
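To make the contrived example concrete, here's a hedged TypeScript sketch of the idea; `GreaterThanZero` and the smart constructor are illustrative names, not a library API:

```
// A branded ("newtype"-style) number: the brand exists only at compile time.
type GreaterThanZero = number & { readonly __brand: "GreaterThanZero" };

// Smart constructor: the only sanctioned way to obtain the branded type.
function greaterThanZero(n: number): GreaterThanZero | undefined {
  return n > 0 ? (n as GreaterThanZero) : undefined;
}

// No out-of-bounds unit test needed: the type rules the case out up front.
function reciprocal(n: GreaterThanZero): number {
  return 1 / n;
}

const x = greaterThanZero(4);
if (x !== undefined) reciprocal(x); // reciprocal(-1) would not compile
```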
Another hot take:
The value of different types of testing is as follows (most valuable first):
- E2E tests that test customer flows
- E2E tests that prevent regressions (made after a bug was discovered)
> - developers that don't think specifying types is helpful are still early in their careers (those who suffered in the dungeons of Java get some slack)
Developers who believe specifying types doesn't have any drawbacks are still early in their careers :D
My experience is that developers really should realise quickly that strictly declaring types for all your variables is a very good thing.
But with the rise of JIRA and the pressures to close tickets with the least amount of work a lot of newer developers don't realise this or are not allowed to spend the time to do it right the first time.
> null/undefined is an inhabitant of every type in typescript, I would suggest purescript instead.
You're right -- but I've found in practice that strict null checking[0] is more than enough. Also I assume that Purescript has a similar bottom value as haskell[1] (please correct me if I'm wrong).
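For illustration, a minimal sketch of what strict null checking buys you (assuming "strictNullChecks": true in tsconfig.json; the function is made up):

```
// With strictNullChecks, null and undefined stop inhabiting every type.
function greet(name: string | null): number {
  // return name.length;  // compile-time error: name might be null
  if (name !== null) {
    return name.length;   // OK: narrowed to string
  }
  return 0;
}

greet(null);              // fine: null is explicitly part of the type
// greet(undefined);      // rejected: undefined is not string | null
```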
> I think you mean the ability to add explicit types to variable names. It always had types.
Yep -- I actually meant literally the `typing` module[2].
> Certainly.. iff you use dependent types.
You can get some good guarantees with container types (`newtype` in haskell terminology) that are constructed in a very specific way, assuming you have immutable values. For example, defining some type `NonZero<Int>` in typescript w/ `private readonly` interior value.
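Roughly, a hedged sketch of that container pattern (`NonZeroInt` is a hypothetical name):

```
// A newtype-style container: private constructor + private readonly value,
// so an invalid instance can never be constructed.
class NonZeroInt {
  private constructor(private readonly inner: number) {}

  static of(n: number): NonZeroInt | undefined {
    return Number.isInteger(n) && n !== 0 ? new NonZeroInt(n) : undefined;
  }

  get value(): number {
    return this.inner;
  }
}

const d = NonZeroInt.of(3); // NonZeroInt | undefined: zero can't slip through
// new NonZeroInt(0);       // compile-time error: constructor is private
```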
> Why specify types if they can be inferred? Why do work that our computers can do for us?
You're right -- I misspoke (misstyped?). By all means if inference is possible that is also fine.
Somewhat separately, the statement stands as far as type signatures (i.e. `doThing :: Input -> Output` or `function doThing(in: Input): Output`) go -- they're a great benefit to a codebase in my opinion.
"Scripting language" to me has a definition. Maybe I made it up, but my definition includes "runs immediately". So, basically, any language whose most common usage is either via "#!/bin/path-to-interpreter" or "interpreter script", or registering some extension like ".bat" in DOS/Windows.
This fits python, perl, sh, node
It does not fit C/C++/C#/F#/Java?/Go/Swift/Typescript. Those are 2 steps at least. One to compile which generates an executable. Another to run the executable.
But I'll acknowledge I made up that definition. No idea if others agree. I get that it's arbitrary. I could certainly make something that given "#!/bin/execute-c-as-script" takes the .c file, compiles it to some temporary place and runs it. But that's not "most common usage" so I'll stick to my point as part of the definition of scripting languages. For me, if it's most common to have a manual compile step separate from a run step then it's not a scripting language.
The only criterion I've found which makes some sort of sense is whether the language designers consider "scripting" a use case to actively support and use to drive language features, or if it's a second-class citizen.
I think there's a bigger issue with the narrative here. Sure, types (and tests) help avoiding bugs, and they are definitely useful in large projects like Airbnb. However, they also add a cost that is always ignored, and it's even taboo to talk about it. You don't want to be labeled as an "anti-good-practice" person.
But for some cases, this cost is tremendously important IMHO:
- Startups: being X% faster at launching your product might mean the difference between life or death for the company. This includes Airbnb back in the day!
- Early libraries: not being forced to use a specific type makes it easier to change your mind midway and experiment. Later when the library API is more stable, sure, feel free to add types. Thinking too structured is dangerous when trying to innovate.
- Personal projects: where it just doesn't matter if it breaks in some situations and having to build a whole babel tower might discourage you from getting started for fun.
As an example, doing "horrible" things I could launch a fullstack prototype in a couple of days, or a decently working prototype in a week. Granted, I would not want to work with that code after 1 month and there would be some bugs, but the ability to get something quick-and-dirty is also very important and something categorically ignored. As an added benefit, this also allows you to iterate quickly, and throw whole pieces of code altogether because you avoid the "sunk cost fallacy" both from yourself and 3rd parties.
Please use types, and tests, and other good practices in principle. But also learn when it's good to break those! I get the chills when I think of someone who blindly believes in only using X for everything and anything.
The sweet spot would be to not have an all or nothing proposition, but to be able to work with fluidity in the beginning, and then to harden the result later on by adding in various layers of sanity checks, of which typing could be one.
It's unsurprising that Airbnb—after having succeeded—find that they could have eliminated some of their current problems by making a trade-off earlier on.
A parable: a baker has a shelf of many different flours that she uses to make different kinds of bread. Being the only person in her small bakery, she relies purely on muscle memory and habit to know which flour goes where. She experiments a lot with different mixtures and kinds of bread.
Later, some breads have turned out to be more popular than others, and she's doing well. She hires a couple more bakers, and now finds that she needs to label the flours, write down recipes for the successful breads, and has to establish procedures and guidelines for restocking. Different people have different habits, and muscle memory transfers poorly from person to person.
When she was starting out, she benefited from short feedback loops. The shorter, the better. Later on, the benefits of short feedback loops wouldn't be any diminished, but the need for reproducibility has become much larger than it was at first.
Can you provide an example where static typing slows you down compared to dynamic typing?
I've been using static typing for decades and the only problem I can imagine is having to add an object property definition instead of just using it. And I can't imagine this extra operation being a significant drag. Is it for you?
This is my question as well. I think about types even when I don't explicitly declare them.
"Function x will return a list of a"
"Function y will return an instance of b or null"
"Function z will return an instance of c or throw an error"
How much time am I losing by putting that information in the function declaration? I doubt it's more than the time I lose when a function returns or throws something it wasn't supposed to.
Right, you’re treating the code as statically typed even when it’s not. In that case you never gain the advantage of dynamic languages.
Thinking so strictly about what types a function will receive rules out making use of things like decorators in python that operate on function arguments regardless of type.
Or generic validation utils that take many types and return the passed in type after performing internal validation logic relevant to the type.
"Takes many types" ≠ "cannot be typed". TypeScript supports decorators, union types, and `any`/`unknown` types.
Those generic use cases are good examples of flexible types, but I have a hard time seeing how
* "I don't know whether the `getMyReplies()` function returns a list of items, an iterator, a non-rewindable generator whose results I need to cache, or a closure with/without memoization" and
* "I don't know whether the singular Reply entity has a field called pid, parent_id, parentId, ParentID, ParentGUID, or parent_message_identifier"
count as flexibility rather than a maintenance burden.
For code written in JavaScript you check the source code to see what arguments it takes and what it returns. If it's not written in JavaScript, for example a Browser API or a NodeJS built-in module, you read the documentation and use console.log calls. Then you rely on good naming, for example (nrA, nrB) => sum vs (a:Number, b:Number) => c:Number, where the one without static types is clearer about what the function does and what it returns.
There are many reasons to read the code of your dependencies; you will learn a lot doing so. One issue when you work with the compiled code is that the type annotations have been removed, and type annotations lead to more terse code as it would otherwise become too verbose, e.g. number:number tends to become n:number, and when the type is removed it just becomes n. So you kinda become dependent on your tooling and can't easily just stop using it in favor of something better.
Languages like C and C++ made it readable with debug symbols.
The web stack does the same with source maps. The TypeScript compiler also offers to emit .d.ts files with type information for reuse in other projects.
> In that case you never gain the advantage of dynamic languages.
Treating the code as statically typed makes the code predictable, less prone to bugs, and easier to maintain. If a function's return type is determined by dynamic factors at run-time, it becomes a maintenance and debugging nightmare; the effect is compounded as more and more unpredictable dynamic functions are chained together.
> operate on function arguments regardless of type
Parameterized types and other techniques allow you to describe this kind of behavior in a type-safe way.
> Or generic validation utils that take many types and return the passed in type after performing internal validation logic relevant to the type.
> Right, you’re treating the code as statically typed even when it’s not. In that case you never gain the advantage of dynamic languages.
Both the things you mention can be done in statically typed languages. The only difference is in dynamic languages you don't have to declare interfaces, subclasses, etc. I just wonder whether that really is as much of an advantage as people think.
I've spent years using both types of languages, and I think the safety of statically typed languages is usually worth the slight amount of friction it adds.
There's a gap between things it might make sense to consider as data contracts, and things that can be expressed in practical type systems in common use.
One example: a function which, when given a single element, returns a single transform of that element. When passed a list of elements, returns a map of that transform over the list. You can argue as to whether that's a good idea or not, but I've seen JS functions that make sense in context which do that.
This might just be my lack of type system expressivity knowledge, but how would one annotate the type of that function? I get that you could use a union type on the param and the return value, but not how you link the two.
Yes, this is very common in JS, and somewhat common in other popular dynamic languages. In TypeScript, for example, you'd implement this by defining several overloaded signatures. Note that this isn't C++ or Java, so you still have a single function implementing all of them - the various overloads just specify what happens depending on the arguments you pass.
This is a limited approach, because the overloads aren't a part of a function type - i.e. you can't have an anonymous function value that is overload-typed, it must be a named function. In principle, a similar mechanism could be used for types as well, it just doesn't usually come up in that context (e.g. callbacks typically don't return different things), which is probably why they didn't do it.
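A hedged sketch of the single-vs-list function from above, using overload signatures (`double` is a made-up example):

```
// Two declared overloads, one implementation: callers see precise types.
function double(x: number): number;
function double(x: number[]): number[];
function double(x: number | number[]): number | number[] {
  return Array.isArray(x) ? x.map((n) => n * 2) : x * 2;
}

const a = double(3);      // typed as number
const b = double([1, 2]); // typed as number[]
```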
Your statement is just a tiny bit too general, and that's where you're missing the mark.
"What about their input and output is" => "what type their input/output is"
The next question is what does the type of the input/output actually tell you about the input/output? In the most popular languages (using interfaces), it basically tells you that the input/output might have a, b and c set (or they might be null), and it could also have anything else set.
So the actual information you have is that you might or might not have (a,b,c) and also might or might not have (anything). Not as foolproof as you think. The natural objection is that aha! I defined this class to require (a,b,c) in the constructor and do a nullity check, so I know much more than this! But this is back to having "types in your head", that constructor signature and nullity check isn't compiler enforced (as in if you change it, nothing will fail in the typesystem at the point of the function that we're checking, as long as those fields remain on the class). So the useful part of the static typing is actually living in your head either way.
>Can you provide an example where static typing slows you down compared to dynamic typing?
I spend an annoying amount of time trying to compare/add/multiply... different types in C++ like time, durations, dates, ints, floats in my day job.
> And I can't imagine this extra operation being a significant drag.
If you have an open source C++ web framework that you think is comparable in features to python django or ruby on rails, I would be very interested as I do want the performance improvements but all the ones I have looked at look like way more work to implement what I want than using django/RoR.
> I spend an annoying amount of time trying to compare/add/multiply... different types in C++ like time, durations, dates, ints, floats in my day job.
Multiplying ints and floats "just works" in C (although you have to mind overflow, but that's not a type issue).
Other examples don't make a lot of sense. E.g. why would you need to multiply time by duration, or compare one with the other? They're different quantities, reflected as such in the type system. That it won't let you do something like that is the whole point - it would be a bug if it did (and if not, then the data types were chosen wrongly).
OTOH, adding a duration to a date is semantically meaningful - which is why std::chrono has operator+ overloaded for various combinations of time_point and duration.
That is kind of my point. When I use C++ in my day job, there are highly optimized libraries/classes which makes sense considering the amount of work that needs to be done.
However, if I am just trying to do some quick data analysis and the data set is small, I just want to do some quick math on some basic data and not have to write code to convert between every single type under the sun.
With dynamic types, I can quickly convert data between dictionaries, sets, lists and what not and run it extremely quickly. That is much quicker than trying to compile and run code to transform data into the results I am trying to create.
These people reap the greatest benefit, because they are least familiar with the codebase and get the most out of having the computer check their changes make sense.
Totally agreed. I'm not saying that they would not get benefits! I'm just saying there's also a cost and that might be the difference from someone continuing learning or giving up.
Depends on compile/deploy times really. 5 minutes per cycle is painful. 30 seconds not so much. You often just compile the part you need and that can be very fast. Depends on the language too. I've found .NET turnaround times faster than Java for example.
There is a cost to breaking up your logic into various modules/packages on the file system when you could simply write all your code in one gigantic file and not worry about any of that. Just because it saves time doesn't mean it's the best choice.
As a user of both Haskell and various LISPs since 1990, I would say I personally am as productive in both strongly typed and dynamically typed languages, assuming I have a match between my problem and my platform, and avoid overengineering my types (heh).
I am also equally productive in what I consider weakly typed languages (hello, Java), when I can avoid boilerplate hell.
Bottom line is I think the productivity of the developer is more a function of the familiarity with the language, platform, tooling and problem (and proposed solution) than how strongly typed the language is.
I do find that having a REPL improves productivity in essentially all cases.
Exactly. In fact, I find dynamic types to be much slower to work with in the long-run.
For example, with a statically typed language, just one look at the method signature is sufficient to know exactly what it has to be passed. In the worst example of dynamic typing, in Javascript I don't even know how many arguments have to be passed. I probably have to look at the API documentation, and hope the author gave details about what the function is expecting.
And compile-time errors are much cheaper than runtime errors.
It could just be that the devs on a team don't know how to use those tools yet. Having to learn how to use TypeScript effectively while establishing the patterns that underpin a new project would definitely slow you down. Maybe that's a cost you can bear early on, but maybe it isn't.
- Slow debug cycle due to increased compile time. If using TDD, it takes longer to run tests. The delays add up, especially on large projects and especially if you're not used to having a build step during development.
- Sporadic source mapping issues/bugs which makes it hard to fix things (I had an issue with Mocha tests last week where it was always telling me that the issue was on line 1). I've worked with TypeScript on multiple projects across different companies but every single time, there has been source mapping or configuration issues of some kind.
- Type definitions from DefinitelyTyped are often out of date or have missing properties/methods. It means that I can't use the latest version of a library.
- Third party libraries are difficult to integrate into my project's code due to conflicts in type names or structural philosophy (e.g. they leverage types to impose constraints which are inconsistent with my project requirements).
- Doesn't guarantee type safety at runtime; e.g. if parsing an object from JSON at runtime, you still need to do type validation explicitly. The 'unknown' type helps, but it's not clear that this whole flow of validating unknown data and then casting to a known type adds any value over regular schema validation done in plain JavaScript (see the sketch after this list).
- Whenever I write any class/method, I have to spend a considerable amount of time and energy thinking about how to impose constraints on the user of the library/module instead of assuming that the user is an intelligent person who knows what they're doing.
- Compiler warnings make it hard to test code quickly using console.log(...); I'm more reliant on clunky debuggers which slows me down a lot and breaks my train of thought.
- The 'rename symbol' feature supported by some IDEs is nice, but if a class or property is mentioned inside a string (e.g. in a test case definition) then it will not rename that; so I still have to do text search after doing a symbol rename and manually clean up.
- It takes a lot of time to think of good names for interfaces, and they are renamed often as project requirements evolve. It's not always clear whether similar concepts should be different types or merged into one. It often comes down to what constraints you want to impose on users; this adds a whole layer of unnecessary mental work which could have been better spent on thinking about logic and coming up with meaningful abstractions which don't require arbitrary constraints to be imposed.
- Setting up TypeScript is a pain.
- Adds a lot of dependencies and complexity. It's a miracle that TypeScript works at all given how much complexity it sweeps under the carpet. Check the output JavaScript. It's ugly.
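On that runtime-validation point, here's a hedged sketch of the "unknown plus explicit check" flow; the `User` shape and `isUser` guard are made-up names:

```
// An `as` cast is compile-time only, so unknown data needs a real check.
interface User {
  id: number;
  name: string;
}

// A user-defined type guard that actually inspects the value at runtime.
function isUser(v: unknown): v is User {
  return typeof v === "object" && v !== null
    && typeof (v as any).id === "number"
    && typeof (v as any).name === "string";
}

const raw: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
if (isUser(raw)) {
  console.log(raw.name); // raw is narrowed to User here
}
```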
" not being forced to use a specific type makes it easier to change your mind midway and experiment."
Is that really true? Whether the original declaration is strongly typed or not, it's likely that changes to the type will result in code changes needed wherever the value is being used.
The only thing that no typings does is make it impossible for the compiler to help you find where it is being used.
Yes, through duck typing. In some parts of my code near the entry point I might not care what the input "file" is, but later code (or the underlying library) might accept either a path or an object, for instance.
This is especially useful for options that are not used near the entry point but are instead passed down. If the later implementation changes, with static types you'd have to change the middle implementation as well to accept the new/different types.
Duck typing is completely orthogonal to dynamic typing - you can have static duck typing. C++ templates are one well-known example, but OCaml's intersection of its OO system with type inference also makes it kinda duck typed - e.g. if you define a function that calls a method on an argument, the type of argument is inferred as "some object that has a method with such-and-such name and types". If you call another method, that also gets added etc. But it's still statically typed - at call site, the actual type of the argument must have those methods, or it won't compile.
Or, getting back to C++, we have polymorphic lambdas there now (which are implemented in terms of templates, but with syntactic sugar that makes it all much cleaner):
auto f = [&](auto x, auto y) {
return x + y;
};
cout << f(1, 2); // 3
cout << f("a"s, "b"s); // "ab"
cout << f("abcd", 2); // "cd"
cout << f(main, true); // compile-time error
Again, all statically typed, although the type checking basically consists of substituting the types in the body and checking if the result makes sense. Which is to say, the error messages from such code are very similar to the ones you get in e.g. Python if you have the types wrong, except here the errors are compile-time, not runtime.
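TypeScript, which much of this thread is about, is yet another example of static duck typing: its structural type system accepts any value with a matching shape, checked entirely at compile time. A hedged sketch (the names are made up):

```
// Structural typing: any value with a matching shape satisfies the interface.
interface Quacks {
  quack(): string;
}

function poke(d: Quacks): string {
  return d.quack();
}

poke({ quack: () => "quack" });  // OK: shape matches, no explicit declaration
// poke({ bark: () => "woof" }); // compile-time error: wrong shape
```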
The Agg C++ library defines every class as a template, taking duck-typed parameters:
- type of Agg object passed into object
- type of "algorithm" used (or similar)
I read matplotlib's Agg C++ backend. Guessing which Class A could be passed into various template parameters in Class B was miserable because of duck typing. And it also broke "jump to definition".
```
template <typename A>
class B {
    A a;
    // this->a.foo() compiles, but you don't know what type `a` is
    // without finding every instantiation of B.
};
```
> Startups: being X% faster at launching your product might mean the difference between life or death for the company. This includes Airbnb back in the day!
I've been developing software since the mid-80's and worked in a couple of startups, and I never saw a case where using strong typing would be the cause of business failure.
> - Personal projects: where it just doesn't matter if it breaks in some situations and having to build a whole babel tower might discourage you from getting started for fun.
It is no fun to go back to something that was left hanging 6 months ago and reverse engineer what the code actually does.
I've seen too many projects fail because they are way over-engineered. A complex pipeline, build system, developers not familiar with the practices, etc. all adds up. Now I do not know how much typescript might add in cost to all of this, but it's definitely part of what I'd consider an over-engineered MVP.
Dealing with bugs at runtime that would have easily been caught by a compiler.
Difficulty in understanding your own code base - functions available, parameters on those functions, the properties on the objects passed into those functions.
Refactoring - some of the heaviest refactoring you'll do is when quickly iterating on a small application. Refactoring is super error prone, and typing minimizes those errors.
As always the typing is optional. So if it is slowing you down or some library doesn't have types, or you just don't need it in a particular case - then just use 'any'.
I've experienced typings increase my productivity in even the smallest projects. The compiler is your friend, the typings you need are often minimal and mostly inferred.
Agreed, I never said the opposite. But you'll agree that the costs of using types are mainly upfront, while the costs of not using them are mainly in the form of debt.
Except the costs of using types are usually in the form of a learning curve: i.e. when you are learning the type system it's relatively expensive, but once you've mastered it it's just part of your coding practice. When you do run into the rare issue, it's usually easy to solve.
By contrast, dynamic typing problems never go away. They're usually because of a faulty assumption, or due to a typo, or they're the cumulative hours you spend trying to figure out what exactly has to be passed into a function since it's not explicitly part of the signature, and the maintainer of the library you're using didn't document it well.
In the long term, static typing is much less expensive, and it's no accident that people tend to prefer more types over the course of their career rather than fewer.
I'm sure for many people this cost is reversed and they are wasting time in dynamic language for looking up methods or fixing bugs that would not exist in typed code.
If I could give this comment a year's worth of up-votes I would.
Dynamic languages slow me down.
I have a terrible memory when it comes to method syntax, for example I would still have to look up Javascript's substring syntax every single time even though I have worked with JS for like 12 years.
Static typing's ability to supercharge autocomplete is a massive time boost for me.
Yes JS is probably the most egregious. After having transitioned mostly to statically-typed languages, it seems positively backwards not to have the types, or even number of arguments specified as part of the signature to a function. I have even had to dig through the source of a library when the author has neglected to document an API well.
Bizarre attitude. Types help you work faster. They eliminate the entire class of “undefined <something>” errors that you usually have wade through when writing js. I’m guessing that you don’t have a lot of experience with modern typed languages.
As you say I think that perception is largely due to lack of experience with statically typed languages in general, and also maybe lack of experience with projects of non-trivial complexity and/or managing production code.
Compile-time errors feel like an unnecessary hindrance when you're not familiar with them, but when you understand that they are replacing much more vexing and difficult runtime errors, you never want to go back.
I think you're generally right. One thing I can really appreciate from TypeScript, though, is that it strikes a really nice balance in compensating for point 2. If something turns out to be really hard to type, or simply too much work, the `any` type makes it really easy to opt out of typing in a specific location. Once you've settled on the API and code that works, you can introduce the typings anyway. (Although I must say: exactly for those uses where I am expecting to be refactoring soon, I find I feel much safer to experiment knowing that TypeScript will help me with that.)
Furthermore, with TypeScript's current popularity, point 3 is starting to be less and less relevant too. When I start a React project, I just run `npx create-react-app --typescript`, and I'm up and running. TypeScript support by default is getting more widespread, which can make it a no-brainer to use in personal projects as well.
The 'type system' is called spec - so the idea is that you can move fast but once you know some invariants you'd like to keep you can put them in the specification. Having the spec as your type system maintains sanity and gives programmers that are new to the code a high level overview.
It just makes so much sense, really feels like having & eating the typing cake.
There is some truth to your point. And now think about using TypeScript, that gives you the edge right from the beginning. :)
I can assure you that you want a somewhat stable software basis when the traction hits your software product. It is extremely hard to fix (dumb) bugs during those times. Certain edge case bugs suddenly become not so edgy bugs due to more customers using your product.
For me it is an anti-pattern nowadays not using TypeScript or even React/Angular.
I just switched from python to go, and I’m the most productive I’ve ever been as a programmer, honestly. I don’t think types slow you down, personally.
AirBnB is a copy of ages-old vrbo with modifications. It wasn't a breakneck race to see who could launch code first. Coders overestimate the criticality of code to the success of a non tech business that operates on the web/apps. Airbnb isn't high tech, it's a business with modern business process of automation software.
Thanks! I'm considering writing a blog post about rapid prototyping and aggressive refactoring, which sometimes contradicts some of these best practices.
There's been some discussion regarding what these Javascript devs are calling "The Typescript Tax" (presumably the perceived increased cost of development in a statically typed language). What greater 'tax' could you possibly conceive of than a whopping 38% of your development time on an enterprise product being spent debugging problems caused by type inference/coercion? You can directly quantify the costs, and it'd probably make management's heads spin.
Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming.
> What greater 'tax' could you possibly conceive of than a whopping 38%
I always find it fascinating looking at the number of lines of code which do the same thing in different languages. Foundationdb maintains official bindings in Java, Go, Ruby and Python. The implementations have almost complete feature parity. They're all maintained by the same team, and as far as I can tell they all follow best practices in their respective languages.
The sizes (minus comments and blank lines) are[1]:
Python: 4053
Ruby: 2397
Go: 3968
Java: 10077
38% seems like a lot but language expressiveness dominates that. If ruby and java code were equally difficult to write, you'd be paying a 320% tax moving from ruby to java for an equivalent program.
I'd love to see a "programming language benchmark game" that only allowed idiomatic code and compared languages based on code length.
I have no idea what the equivalent size ratio is for javascript and typescript, but having worked with both, I find typescript projects end up bigger. I write more typescript because the type system and the tooling encourages verbose and explicit over terse and clever. (In typescript I end up with more classes, longer function names and fewer polymorphic higher order functions).
The typescript tax is real. That said, I believe the defect rate drops by 38% too. Depending on what you're doing the tax might be worth it.
[1] Measured from commit 48d84faa3 using tokei "code lines". Includes JNI binding code and makefiles, but not documentation
> Java size has nothing to do with types and everything to do with the language.
Yes. I'm making two claims. First to disagree with the GP's claim that a 38% difference between languages was 'whopping':
> What greater 'tax' could you possibly conceive of than a whopping 38%
The fact that the type system isn't entirely to blame is also clear looking at the ruby and python sizes. Ruby and python have very similar type systems, but the ruby code is only about half the size of its python equivalent.
My second claim is that if we did the same comparison with javascript and typescript, typescript would be bigger. I don't have any stats for this though - just lots of experience. The type system seems to push code in a bit more of a classical OO direction because there aren't higher kinded types, associated types, or any of that fun stuff.
As others have said, it'd be fascinating to compare typescript and purescript / elm codebases. I'd expect the haskell-derived languages would come out way ahead on expressiveness. It's constantly surprising to me that we don't have any actual numbers.
> First to disagree with the GP's claim that a 38% difference between languages was 'whopping'
??? You're comparing LOCs and number of bugs. That 320% larger java code might have 50% fewer bugs and may have taken 50% less time to write. Terse code is harder and usually takes longer to write than expressive code.
When writing in Ruby, the "what the hell is this object I'm dealing with here and what methods does it have available?" tax is like 320% for me. I love Ruby. It's so much fun and it's just neat in general, but I find it much more taxing than a nice statically typed codebase I can navigate and reason about with ease.
Everything is just taxed differently. No such thing as duty free programming.
Is LOC really a significant measurement? Sure you have to type more characters when working with a strictly typed language, but that takes a couple of seconds opposed to however long it might take to track down a bug in a massive enterprise system. Not to mention the type system often forces you to consider cases you might otherwise overlook when writing dynamic code, hopefully resulting in better code to begin with—it imposes a bit of discipline.
In my opinion whatever perceived tax a type system entails is superficial—sure, coming from a dynamic language, having to specify types for everything seems like taking on an extra chore—but this is just the surface experience of using a type system—once you’ve used one for a while and furthermore, had the benefit of reading typed code, you really start to appreciate having types around and any so called “tax” involved quickly becomes negligible.
While it might be a “tax” when writing code it’s definitely a boon when reading code—it’s much faster to look at types to get a handle on an api than it is to have to dig through and read the actual implementation to try and decipher what exactly a function expects and returns. When using a Haskell library, for example, I rarely ever have to read documentation extensively or read implementations—looking at the types of functions and a brief description (one sentence) is often sufficient. It takes a couple of seconds. This is the norm with Haskell code. Contrarily, this is an exceptional case for dynamic languages—if the documentation isn’t superb you always wind up having to peek at the implementations at some point—this typically takes longer than glancing at types and a one sentence description.
Dynamic languages might be more “expressive” from a writer's perspective (since you can leave out types) but they’re far from expressive from a reader's perspective—since the writer's convenience often puts the onus of sussing things out on the reader. In enterprise systems, reader convenience is arguably more important.
15-50 being quite a big range and the 2004 publication date of the book (the comments hint at the reference being earlier) make me reluctant to take it at face value without looking into it in more depth, especially regarding new technology like TS, Rust, or enterprise Haskell.
In the original post AirBnB said 38% of their bugs could be prevented by static typing. That would bring the figure to 0.9-3%. Not insignificant, but LoC is still more important.
Haskell has the reputation that if it compiles, it works.
Whether it’s true or not, it would be interesting to consider languages in their ability to produce correct code more quickly.
While I don’t like the idea of writing 3x the amount of code, if one implementation is only 30% longer, for example, and is more likely correct on the first run, it’s still a huge productivity improvement.
This is my experience with statically typed languages. It may be a bit more verbose in the implementation, but it takes much less time to reach a correct solution.
I find that whatever "savings" there are in terms of code volume in dynamically typed languages is usually more than made up for in the need for more tests.
Java has a lot more boilerplate—or near boilerplate—than the others. For example, you probably want to implement some kind of toString method, if only for your own sanity during debugging. You can do that in Python, but __str__ is often good enough.
I also wonder how many Java "lines" are just a closing brace or something trivial.
> I'd love to see a "programming language benchmark game"
> that only allowed idiomatic code and compared languages based on code length.
You just described the old Debian language benchmark game. You could select the various coefficients to apply to code size, program speed etc. I remember it played an important role when I decided to give OCaml a try: I selected 1 for size, 1 for speed, 1 for RAM and zero for everything else, and that resulted in OCaml and Pascal, the two languages I was prejudiced against from school years. Of course I went with OCaml.
Java is a terrible exemplar for a statically typed language. It's incredibly verbose, and is often held up as a bad example of how to do typing. More modern languages manage to have type systems that are more effective at identifying issues at compile time while also being more flexible in terms of the design patterns they allow.
You might have also noticed that Go is statically typed, and it's actually less lines of code than Python (maybe a lot less if trailing braces are counted in that LOC number).
>What greater 'tax' could you possibly conceive of than a whopping 38% of your development time on an enterprise product being spent debugging problems caused by type inference/coercion?
Is there a link where they say they spent 38% of their development time on this? The slide just says 38% of bugs and fixing bugs is not the only way development time is spent and not all bugs are equal in terms of time to debug.
> Is there a link where they say they spent 38% of their development time on this?
No there is not. The comment you're replying to has no factual basis for this claim.
Nor is there a basis in the article for 'what these Javascript devs are calling "The Typescript Tax"'. In the article there's one comment that refers to such a tax, but the commenter doesn't claim to agree, and every reply disagrees.
Nor is there a basis in the article for "Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming." The article says nothing about the total quantity of bugs, bugs per line of code, or other comparable metric.
The comment you're replying to seems to have a predetermined agenda tangentially related to the article.
My apologies, I was commenting very generally regarding attitudes typical of Javascript developers I have encountered in my professional life, rather than addressing the article directly. Also, you're right. 38% of development time is not the same as 38% of bugs, mea culpa, but there is an overall point to my statement about the productivity cost that I believe remains correct regardless.
You’re correct, but it’s also true that debugging is often a greater percentage of time/effort than the original coding, so 38% of bugs is nothing to sniff at.
The bigger problem is that "Simple fix" !== "Simple ramifications". Runtime issues caused by missing very simple typing bugs can bring down critical systems in Node. I worked at a company where a payment batching system built (not by me) with a MongoDB database failed to work correctly because the aggregation query was checking whether a field was `null`, without checking if it was there in the first place... No one even noticed for several weeks until someone finally investigated the low volumes. That's a very kind example too; just because these bugs are simple to debug when they pop up doesn't mean they won't cause massive destruction in prod.
Until they’re not. I’ve had some codebases where it took 3-4 hours to figure out that some really bizarre issue was really just a type bug. And I don’t consider myself a terrible debugger.
I don't think that's really true as a general rule. Passing a variable of an unexpected type or nullable could have completely unpredictable behavior depending on what assumptions the code makes about the incoming parameter. It could have no effect or a catastrophic one, there's really no way to know without reading actual code.
The kinds of bugs which TypeScript prevents are those which could have been caught by the simplest of integration tests using the simplest inputs.
Every time I rename or change something in TypeScript, I have to change it in so many different places. In JavaScript, changes were much more localized.
Not at all. Typing bugs often leave code looking perfectly fine until you discover that one of the variables you expected to be y was actually x. And the only way to find out is to just print out everything
Can happen with TypeScript too. There is no runtime type checking so if you parsed a JSON object and cast some unknown property `const result = someRemoteObject.someUnknownPropertyWhichIsAString as number`, it doesn't guarantee that the result is actually a number. You still need to do actual schema validation just like with old school JavaScript.
> Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming.
Why is that? There were dynamically-typed backend languages long before Node.js became popular. And you now can easily have a TypeScript-based Node.js backend, which gives you the nice benefit of having the exact same language and types for both your backend and frontend. I'm not saying the Node runtime is perfect, but I don't understand why having a runtime that lets you use JS/TS as a backend or local language is quite such a bad thing.
I have absolutely no way to back this up, but my gut feeling is that the node ecosystem's tendency towards small modules, coupled with a lack of explicit interfaces in the language, increases the number of places where a developer is likely to introduce a type error.
It is a bad idea to intermingle the problems of dependency management with the concerns of a type system as though these are somehow related concepts. If you are not managing your dependencies sloppy things will follow regardless of the type system in place. There is an unrealistic expectation that JavaScript developers shouldn't have to write code because somebody has already written it for them in an NPM package.
I don't just mean external dependencies. The culture of "small modules" also means internal modules are vulnerable to this.
Edit to add: on further thought, I think it would be a mistake not to look at dependency management and type-safety at the same time. If my external dependency exposes a defined interface, that gives me more information about that dependency which I can use to manage it (replacing it, for instance) than without.
If I import a Nuget package into my C# code that changed a signature behind my back or even one that correctly versioned the package as “we have breaking changes”, I’ll know at compile time. With an NPM package I won’t know until runtime.
Do you have integration tests to ensure your applications do all the things it claims to do and that dependencies don't break those things? If dependency health isn't a part of your product testing bad things can happen.
Another thing to consider is that maybe you might not need a given dependency. Sometimes it is faster to include a bunch of dependencies at the beginning of a project just to produce something functional in the shortest time. As a product matures there is a lot of opportunity to clean up the code and slowly refactor the code to not need certain dependencies.
I’ve found just the opposite especially going into companies with less mature engineers and “architects” who’ve been at the company forever and haven’t had much outside experience. They both tend to reinvent the wheel badly and write code for cross cutting concerns that have already been solved better like custom authentication and custom logging.
Then again, I’m a big proponent of avoiding “undifferentiated heavy lifting” at every layer of the stack.
I think this comes down to reinventing the wheel. I frequently see less mature developers misuse that term to avoid writing code or making architectural decisions. That term isn't an excuse to off load your work onto someone else's reinvented wheel. The idea is to avoid unnecessary churn, whether that means you writing original code or using a dependency, and even still the concept is not always the best course of action.
As an example, using a framework to avoid touching the DOM is an external reinvented wheel. As a leaky abstraction the need for that framework may dissolve over time as requirements evolve. That evolution of change can result in refactoring or a growth of complexity. Accounting for those changes means reinventing the same wheels whether or not the abstraction is in place. That is because a framework serves as an architecture in a box, but if you are later making architectural decisions anyway you have outgrown at least some of the framework's value.
I’ve seen it before where “I don’t need a framework”. Then after the project grows, the “architect” ends up reinventing the same framework badly. I have no problem with opinionated frameworks especially with larger organizations. It also makes it easier to onboard developers. As a new dev on a team, I’d much rather come into a company that uses Serilog than used AcmeSpecialSnowflakeLogManager.
I have seen that as well. Many people will complain that if you don't use a framework you will end up writing one anyways, which is completely false. For many people who have never worked without frameworks writing original code means inventing a new framework, because that framework-like organization is how they define programming to themselves. This behavior is safely described as naive, or a lack of exposure to differentiating methodologies. This reminds me of that quote: when you're a hammer everything looks like a nail.
I think what you are ultimately getting at is: how do you ensure they write code the same way? That isn't a real problem, and so it isn't something you need to address. This comes up because insecurity is a very real concern.
Instead provide a business (not code) template to define requirements for integration. This should define what is needed in their documentation, what they output/return, their inputs, automation test requirements, and so forth. That list deliberately does not include code style or any other matters of vanity. Most of this can be automated, but you need code reviews to hold people to account and to use as a training opportunity. Be honest and critical during the code reviews. If people find respectful honesty offensive talk to them about it outside the group, and if they still can't get with it transfer them out of your team.
So you’re suggesting that one person decides that they like JQuery on the front end and one decides that they want to use plain Vanilla JS and one decides they like the look of Material UI, the other Bootstrap, and one happens to like sites that look like MySpace that’s ok?
And on the backend, if one developer liked log4net for logging, the other Serilog, and yet another developer likes to build his own logging framework that’s okay?
Let’s take this to a logical extreme, since one person is an old school VB developer and another likes F#, let’s just do that and throw in a little COBOL.Net for good measure....
And when a new dev comes in and takes forever to ramp up? No big deal. While we're at it, let's not follow the REST semantics either, and no one needs those pesky frameworks like Express and ASP.Net. Let's just go bare metal...
Or you could just have agreed upon standards....
If you’re going to have standards anyway, why not just use frameworks where you can open a req for the required framework that lets you get some developers who you can make sure already know the basics?
You’d be surprised at how many cheap developers you can get to throw together your yet another software as a service CRUD app using React.
> So you’re suggesting that one person decides that they like JQuery on the front end and one decides that they want to use plain Vanilla JS and one decides they like the look of Material UI, the other Bootstrap, and one happens to like sites that look like MySpace that’s ok?
The expectation is that developers are literate in their line of work, even though most companies don't hire like that. Since developers are expected to be literate, don't waste time worrying about how to write code. If a developer suggests a tool that helps them with the "how to write the code", reject the tool.
Set business principles that specifically communicate the acceptable scope of work. Such principles can be something like this (a sketch of automating one such check follows the list):
* code will execute within X milliseconds.
* code will allow up to X total dependencies (choose wisely).
* code will not exceed X bytes total size. If they want to include jQuery, their code solution just increased by 69kb.
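A minimal sketch of automating the size check in CI (my own illustration; the budget number and bundle path are placeholders, not anything from the list above):

    // check-size.ts: fail the build if the bundle exceeds the agreed budget.
    import { statSync } from "fs";

    const BUDGET_BYTES = 200 * 1024; // placeholder for whatever "X bytes" was agreed
    const size = statSync("dist/bundle.js").size;

    if (size > BUDGET_BYTES) {
      console.error(`Bundle is ${size} bytes; budget is ${BUDGET_BYTES}.`);
      process.exit(1); // objective, automated, and no style debate required
    }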
The common failing is confusion around standards. Many developers find it easier if everybody just does everything the same way, so that things appear standard. That approach ignores everything relevant to product quality and the goals of the business, which is stupid. It is stupid because it sacrifices the simple for the easy, at the lowest level of leadership, by lazy or weak managers. You don't get to offload management onto a framework and still hope to be a strong, effective manager. Ultimately the success of a manager is tied to the work of that manager's team and to whether the team can retain and promote effective people.
When you set concrete and specific business limitations the limits are easily understood and based on objective criteria. Innovation and creativity are not stifled as developers are free to work within the constraints anyway they please.
This all makes sense once you accept that the group is worth more than the individual, but the product is worth more than the group. Sacrificing business goals for group cohesion is backwards, just as sacrificing group integrity for an individual's opinion is backwards.
> You’d be surprised at how many cheap developers
That is also a common failure. Developers are never cheap unless you offshore. In hoping for cheap you will actually get developers who cost the same, do crappy work, don't value themselves, and aren't valued by the organization. That is toxic. Instead, hire people with the potential to do great things and manage them effectively.
I wouldn't be surprised if it is actually faster to develop in statically typed languages once projects reach a certain size. It makes refactoring so much easier, and it's a lot easier to reason about what a function's arguments could be. I've had cases where the easiest way to tell what a function accepted as an argument was to just run the program and print out the input. If it were statically typed, I would know exactly what the arguments are, and could easily see the effects of modifying one of them. With dynamic typing, I spend a lot of time just looking at how a function is used elsewhere in the code to make sure I don't break anything.
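A small illustration of the difference (my own sketch; the function and its parameter are hypothetical):

    // JavaScript: what shape is `user`? Run it and print to find out.
    function greet(user) {
      console.log("Hi " + user.name);
    }

    // TypeScript: the signature answers the question at a glance, and the
    // compiler flags any caller that passes something else.
    function greetTyped(user: { name: string }) {
      console.log("Hi " + user.name);
    }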
"Tax" may indeed be the right word. You pay something up front, and in return you get benefits that tend to outweigh the costs, and which would be even more expensive to pay for directly. I say that as a seasoned dynamic typer, who finds himself wishing frequently that the X3J13 standardized optional typing in Common Lisp a bit more.
What greater 'tax' could you possibly conceive of than a whopping 38% of your development time on an enterprise product being spent debugging problems caused by type inference/coercion?
As opposed to 86 percent of all statistics, which are basically made up. (Or presented grossly out of context, which is pretty much just as bad as "made up").
Seriously, it takes a lot more to trigger an assessment of "whopping" significance in my mind than a side-angle photo of some random slide presented with no background context whatsoever.
Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming.
I might be wrong, but I think you have misunderstood this, or we've come to wildly different conclusions. My understanding is that they were using a dynamic language without types, and 38% of these bugs would have been prevented if they were using static types. My conclusion is that developers would spend at least 38% less time debugging type issues. This is because code editors (e.g. VS Code) have really good TypeScript support, such as autocomplete, and you get instant feedback if you ever make a mistake.
You seem to have arrived at the opposite conclusion, which is that developers would require the same (or more) effort to prevent and fix these bugs while using TypeScript. I think you've missed the fact that most type issues don't take that much effort in the first place. Once you set up the "guard rails" by making a decision about types (or just assigning a value to a variable), writing the rest of the code is pretty effortless.
I would take it a step further and say that TypeScript development is actually more productive than plain JavaScript. (Again because of the better autocompletion and instant type checking in my editor.) I make far fewer mistakes, but I also write code faster.
There's a hidden tradeoff. For example, if you have to spend 1000 extra cumulative hours to use types, and you've spent an extra 500 cumulative hours fixing all of the bugs associated with not using types, then, from a business perspective, it would have been worth it to not use types (and vice-versa).
It depends entirely on the bugs. If a bug's reproduction path is rare enough that no critical mass of users runs into it, then it's not going to make a difference. Reddit, for example, is currently bogged down with login bugs, redirect bugs, redesign bugs, etc., and they're simply not prevalent enough to drive a mass exodus off of Reddit. Reddit will continue to grow fast even with these bugs.
The vast majority of user-seen bugs are not common. The common ones are usually caught with testing.
I used to believe that was always the case. But then I worked for a company that didn’t really care about quality; they just wanted to add features to mark off a checklist in pursuit of either getting more funding or getting acquired.
Right, but when customers are putting up with a crappy product, they are just waiting for a better alternative to come along. That's a huge liability to be sitting on -- just ask MySpace.
That actually proves my point. The original founders sold to News Corp in 2005, well before Facebook took over. The strategy was a success, depending on where you stand....
Our difference of opinion hinges on the ambiguity of “success”. Here you use “success” strictly in the sense of making a financial gain, while MySpace’s users would have called it a failure.
Isn’t that the metric that all investor funded startups use - either being acquired or going public at some multiple of their original investment?
Unless I’m working at a job where I’m serving some greater good, my definition of success is: did I help achieve the company’s objectives, did I get paid well, and will I be able to use my coworkers and management as references in the future?
Your position is that financial gain is the only metric that matters and that the quality or longevity of the product is irrelevant, and my position is that that's a concise summary of everything that's wrong with "tech". We agree to disagree.
How much of the code run by most user-facing businesses is the same code that was used 10 years ago, no matter how good it is?
No matter how great the Perl code I wrote 15 years and 5 companies ago was, I doubt they are still using it.
The code that runs Amazon.com, Google.com, etc. is nothing like what they ran 10 years ago.
Besides, we aren’t talking about feeding starving children here. We are talking about the death of a social media platform. But more generically, if yet another software-as-a-service CRUD app dies, who’s hurt besides the investors, who expect most of their startups to fail, and the developers, who can walk down the street and get another job in a week?
As an individual, why am I going to fight the losing battle of trying to implement great code quality and test coverage when I’m being incentivized by how many features my team can pump out as cheaply as possible, by hiring a bunch of poorly paid developers overseas?
We call what we do “engineering” but there is a world of difference. If my program running a SaaS app fails, the user gets a 500 error. If a mechanical engineer rushes, people die.
IMO, dynamically typed languages made more sense when the type systems in mainstream statically typed languages were much more primitive; that limited what you could do, and it was often verbose when you could.
But the kind of type systems your average dev has to deal with have improved a lot in expressiveness since the 90s. Type inference is basically mainstream, too. There are still things that cannot be described nicely, but they tend to be a lot more "magic": the kind of stuff that lets you write code faster, but not necessarily write more readable and understandable code.
Like everything else, there are pros and cons. It’s not like dynamic languages are worse or better per-se, but they surely make it harder to scale a team or a codebase 10x the original size without introducing what effectively is some sort of typed system (or strict “typed” conventions that the team agrees upon) in place. Which may be done right, or disastrously wrong. A typed language generally doesn’t leave this step up for interpretation later on, since everybody agrees on it since day one.
As a newbie to clojure, I’m very curious to see what a large codebase looks and feels like. The clojure attitude seems to be that types come at the cost of reduced code reuse, which I tend to agree with. After a decade spent in strongly-typed OO, it is starting to feel like a bunch of sound and fury which seems to end up eventually just getting in my way. I’m definitely open to new ideas in this area, and am looking forward to taking a deeper dive into clojure this year.
Absolutely not, but horses for courses.
I use all of these dynamically typed languages regularly and I enjoy programming in them too, but choosing the right tool for the job should take precedence over personal preferences.
So then you are saying every single use case which node.js currently fulfills should use statically typed languages? This does not match my experience of node.js being used in many of the same cases as other dynamically typed languages. What cases specifically are dynamically typed languages suited for?
This is probably a side effect of the type system preventing bugs involving js's terrible type coercion system. I doubt that Python would have the same level of bugs preventable by typing, even though it is similarly untyped by default.
Python is actually strongly typed and dynamically typed, not untyped. Even Javascript is not untyped -- it is weakly typed. (not that any of these concepts have watertight definitions, but there's a general notion of what they mean)
That said, having used a statically typed language for large-scale development, I've come to prefer static typing even as a Pythonista. Mypy is a partial solution for this; we'll see what the uptake on that is.
Even if it doesn’t help to find bugs, static typing is still worth it. I’ve been fixing some stuff in an existing Python codebase in the last weeks. Most of the time, when I see a new function, I have no idea what’s the type of its parameters...
I love that TS has the 'any' type, because it really allows for some flex in development when you haven't figured the damn type out yet, or when you're dealing with untyped JSON-ish data.
A completed system has 'strong typing'. But a system is always in development. You're trying to do A, but in the meantime, B, C and D need types! And you didn't even know C and D needed to exist. Oops!
So you can tweak tsc to be a little loose with the facts and then, as the module matures, tighten it up.
It's actually kind of interesting that tsc is one of the few 'compilers' (yeah, I know it's a transpiler) that gives you that amount of power over things.
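A minimal sketch of that loosen-then-tighten workflow (the payload shapes are made up; `noImplicitAny` and `strict` are real tsc options):

    // Early prototype: the payload's shape isn't settled yet, so punt with `any`.
    function handle(payload: any) {
      console.log(payload.userId); // nothing is checked here, deliberately
    }

    // Later, once the shape is known, swap `any` for a real type and enable
    // "noImplicitAny" / "strict" in tsconfig.json to lock it in.
    interface Payload { userId: string; createdAt: number; }
    function handleTyped(payload: Payload) {
      console.log(payload.userId); // now checked at compile time
    }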
I use TS with the transpileOnly flag (compile even if there are type errors).
This is great, because I can prototype fast even if things don't yet typecheck, and when I have an idea of what I'm trying to achieve I can document all the code and polish the rough cases without leaving the editor.
I like to think of python as a language that’s great for responsible adults.
If everyone is a good engineer and comments things and tests things you’re going to have a great time. But if you start getting bad practices they’re going to be multiplied.
Heh. I’d counterargue by asking how many times you’ve heard of someone joining a new job and looking at the code and saying, “Ah, this is clearly written by responsible adults!”
> I like to think of python as a language that’s great for responsible adults.
Not a disagreement, but an observation: verbatim the same thing has been said about Ruby and, I imagine, every other widely used dynamically typed language on the planet.
IMO, maintainability is a well-understood weakness of dynamic typing; hence every responsible Ruby/Python/JS/Perl codebase includes a mountain of unit tests that do nothing but cover typing (in addition to the rest of the tests that would be present irrespective of implementation language).
At the end of the day there's no silver bullet: either we let the compiler handle type checking, or we cover it ourselves with unit tests. Doing neither isn't really much of an option, but it is probably where a lot of these type-related bugs come from.
You also have to unfailingly keep comments up to date. Our org struggles a lot with this, but I wouldn’t say we are irresponsible apart from the fact that we’re using a language that makes humans do error-prone tasks that are already solved by automation. :)
> Most of the time, when I see a new function, I have no idea what’s the type of its parameters...
For that reason I usually use types for defining utility functions that get re-used in multiple places throughout the app. For everything else I just name the variables according to what the thing is, e.g.:
user_model, user_model_list, user_id_set, user_id_to_user_model_dict, etc. If you have a good style guide, there really should only be one allowable name for any given variable, and it should be immediately clear what type each variable is from the suffix on the name.
It's more readable. When declaring a type, you only see the type where the variable is declared or passed into a function, whereas when the type is in the name it's obvious on every line where it's used, so you don't need to waste working memory remembering what things are.
If you’re using a new enough version of Python 3 then you can add type annotations now. Not great by any means but it’s made working with a large-ish python codebase tolerable (together with PyCharm).
Any bug can be caught by the type system, provided it gives you the ability to encode your intention. For example, in Rust writing to a closed fd could be a type error, in OCaml not catching all the "exceptions" could be a type error, in Ada writing out of an array's bounds could be a type error, etc.
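A sketch of encoding one such intention in TypeScript (my own illustration; the file-handle API is invented for the example):

    // Make "write after close" a compile-time error by splitting the states.
    interface OpenFile { readonly state: "open"; write(data: string): void; }
    interface ClosedFile { readonly state: "closed"; }

    function open(path: string): OpenFile {
      return { state: "open", write: (data) => console.log(`${path} <- ${data}`) };
    }
    function close(f: OpenFile): ClosedFile { return { state: "closed" }; }

    const f = open("/tmp/x");
    f.write("hello");    // fine: f is provably open
    const c = close(f);
    // c.write("oops");  // compile error: ClosedFile has no write()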
Same goes for most other types. JS is generally happy to convert everything to everything, often via strings. So even utter nonsense, like multiplying a function by an array, still "works" (the result is NaN). Conversely, in Python such implicit conversions are rarely in play; the one you see most is int/float.
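A few of those coercions, verifiable in any JS console:

    console.log((function () {}) * []); // NaN (both operands coerced to numbers)
    console.log([] + {});               // "[object Object]" (both coerced to strings)
    console.log(1 + "2");               // "12" (the number is coerced, not added)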
Even if that statement were true (which I highly doubt), it is still deceptive because it does not account for:
- The 2x to 3x extra time that it will take to complete projects using TypeScript instead of JavaScript. This is time which could have been spent on writing more tests.
- The new bugs which TypeScript introduces that JavaScript did not have, related to things like source mapping issues, outdated DefinitelyTyped definitions, TypeScript version mismatches, the illusion of runtime type safety when dealing with parsed JSON data from remote clients, etc. I could talk at length about the problems; there are so many that it would take me about an hour to write them down.
I have over 1 year of experience with TypeScript in multiple projects and earlier in my career I used ActionScript 3.0 (also typed ECMAScript) for over 3 years so unlike the people who make the decision to move to TypeScript, I know what I'm saying. It's going to kill collaboration between projects. I really hope that browser vendors are smart enough to wait for all the hype to die before adding native TypeScript runtime in browsers. I'm certain that the hype will die when people realize that TS is just the same as Java.
During the most difficult time in their existence, companies like AirBnb could rely on JavaScript to build high quality software. I think that in reality this move to TypeScript has nothing to do with software quality and everything to do with keeping their employees busy and degrading their skill level so that they will not be able to join a competing startup in the future. This is just a strategy to turn their employees into corporate cattle and keep them locked into some kind of internal AirBnb private module ecosystem which will be completely disjoint from any universal module ecosystem.
Developers will use fewer and fewer open source modules and more and more private company modules to do their work. The reason is that the company's type system will eventually conflict with the type systems of all third-party open source modules.
> The 2x to 3x extra time that it will take to complete projects using TypeScript instead of JavaScript.
I think you are doing something wrong. I switched over to TypeScript for my personal projects 2 years ago and never experienced that. If anything, there are a few rare cases where I run into problems up front due to type conflicts, but each of those cases would likely have turned into maintenance down the road anyway; the problems were just surfaced much earlier. The increase in effort was tiny, and while it's impossible to say for sure, it plausibly amounted to a cost savings compared with debugging an embedded problem later.
> I have over 1 year of experience with TypeScript in multiple projects and earlier in my career I used ActionScript 3.0 (also typed ECMAScript) for over 3 years so unlike the people who make the decision to move to TypeScript, I know what I'm saying.
That isn't very much experience for the claims you are making.
>The 2x to 3x extra time that it will take to complete projects using TypeScript instead of JavaScript. This is time which could have been spent on writing more tests.
What's the reason to write tests for things that should just work? You shouldn't have to think about tests like "what error does it throw if I pass that type?" With type checking you just won't have that kind of issue.
>> "What error does it throw if I pass that type?" With type checking you just won't have that type of issues.
That's incorrect. If the object was parsed from JSON (e.g. sent to the server by a remote client), then you still need to do a type/schema check even with TypeScript, because types are only checked at compile time, not at runtime.
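A minimal sketch of such a runtime check (the `User` shape is hypothetical):

    interface User { id: number; name: string; }

    // TypeScript types are erased before the code runs, so JSON from a client
    // must be validated by hand (or with a schema library) at runtime.
    function isUser(x: unknown): x is User {
      return typeof x === "object" && x !== null &&
        typeof (x as { id?: unknown }).id === "number" &&
        typeof (x as { name?: unknown }).name === "string";
    }

    const parsed: unknown = JSON.parse('{"id": 1, "name": "a"}');
    if (isUser(parsed)) {
      console.log(parsed.name); // safely narrowed to User
    }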
If you have good integration tests, you don't need to explicitly check for types in JavaScript; type incompatibilities will show up as logic errors in the tests. For example, if you add 100 with '23', you will get '10023' - You don't need to use `typeof ... === 'number'` in order for your test suite to pick this up as a logic error.
Even if your test case is written in TypeScript, you still need to account for the possibility that the front end will pass the value as a string; so the JavaScript test is the same size as the TypeScript one.
Just stop using it. Use gRPC. It's binary, so it's faster. It supports TypeScript, so you get type safety. Type safety is a complex thing; if there is a type-safe instrument available instead of a type-unsafe one, you should use it.
>If you have good integration tests, you don't need to explicitly check for types in JavaScript
That only works in a perfect world, where tests are always kept relevant and cover all cases. But we live in an imperfect world, with people who may forget to update tests/documentation/etc., and in something like JS things will still appear to work.
But in the type-safe world things are different. If you forget to add a field, or delete one, it'll break. If you pass the wrong type, it'll break. A type is a test that is always up to date.
JSON is the best, simplest, most robust, most readable, most flexible data interchange format ever invented and fastest to integrate with external systems. No way I would give it up for Protocol Buffers - That's like going back to the old SOAP protocols.
Have you tried integrating with an external service via a SOAP API? It would be quicker to rewrite the entire service from scratch than to integrate with it.
Also, I'm not a big fan of gRPC; their decision to use HTTP2 instead of WebSockets was a huge mistake. HTTP was designed to transfer hypertext documents; RPCs are meant to transfer raw data; not hypertext documents. In a way, gRPC makes the same mistake as TypeScript; instead of fixing the problem at the root, they take an already complex solution which works well as a base and then they add more complexity on top in an effort to make it behave in a simpler way than the underlying technology.
You can't make something simpler by adding more complexity on top. You just can't.
You can't call something simple if you can't determine how it works at each step. In JS all you can say is "with this sort of data it looks like it works well, but with other data it may throw an error or exhibit undefined behavior." With types, it either works or it doesn't. There are no type issues, only algorithm issues. Of course that's simpler.
There are trade-offs. For example, if you accept "3", should it also work when passing in "three"? By limiting possible errors/inputs, you also make it harder to use.
It's simpler to use when you can see the interface. Otherwise you have to read the whole function body to find out how it works and what kind of data you can pass into it. Of course you can say there should be comments/documentation, but your code should be its own best documentation.
Seconded; tests take time to write, and they come with a development cost and a long-term support cost. So having fewer of this kind of test should be beneficial both time-wise and economically.
>> I really don't think you were using TypeScript correctly from everything you've written
I've worked in multiple high-performing teams and have only received positive feedback for my TypeScript code so I'm pretty sure that I'm using it correctly. I don't like it, but that doesn't mean I'm not good at it.
It's not in my personal interest to say bad things about TypeScript. I'm just sharing my honest unbiased experience.
To me the biggest benefit of static types, type hints, annotations, etc. is their documentation value. I don't have a particularly strong opinion on the issue of static typing generally, but if you don't use types, you damn well better document what your functions expect to receive and what they produce.
It is so frustrating to jump into a codebase and have to read every single line to figure out what the inputs and outputs are. Think about how much time is wasted. It's like an anti-multiplier. One engineer decides they don't want to take a few minutes to annotate types or write a comment, and now every other engineer on the team has to waste time figuring out what's happening. It makes me want to scream.
It's funny how TypeScript proponents seem to acknowledge that Java is bad and yet they consistently fail to point out in what way it's inferior to TypeScript.
I've worked with TypeScript for over a year and before that, I worked with Java for about 2 years at university and if anything, Java is superior to TypeScript because it has a much more consistent type ecosystem. Pretty much every Java project uses the same URL class to represent URLs for example. TypeScript has no consistency between libraries and projects - This will be TypeScript's demise; once developers realize how empty and meaningless supposedly universal type names are as an abstraction, they'll be begging to go back to JavaScript.
Enforcing a type system which is consistent between all projects is as pretentious as using global variables and yet that is the only way forward for TypeScript as an ecosystem.
For me the single best feature of typescript is that it's optional.
It has the rapid prototype capability that I love in Python and JavaScript. But once you're past the experimentation phase, you can make your measurements, write your more detailed design, do a little refactoring, and then "lock things into place" with typing.
I haven't used Java in a while, but how easy is it to write a working Java application without really knowing what you're building yet? (And I know people will say you're never supposed to do this, but I've found that practical experimentation and working examples are a very powerful part of the design loop.)
The type system is like a chain. It's only as strong as its weakest link. A permissive type system which allows for an 'any' type and lets functions specify multiple possible return types is about as useful as no type system at all.
At the other extreme, a rigid type system is not good either because it makes it harder to integrate your system/module with third party systems/modules developed by other teams/companies. This is because other teams might be using completely different programming languages and their type system is extremely unlikely to be compatible with yours - This means that if they want to integrate with your code, they need to do a lot of work to convert their own types to match yours and the result just looks ugly; their project ends up with redundant types/interfaces that sound and behave almost the same but are different in ways that are almost impossible to describe let alone understand.
For example, let's say that I import two modules; both of them expose their own `URL` interface. Both modules chose the same name for the interface, but one exposes a `query` property as a string while the other exposes `query` as a Map. How do you reconcile these two conflicting types within your system? The result looks ugly and confusing no matter what you do.
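A sketch of that clash in TypeScript (the module names are made up):

    namespace ModuleA { export interface URL { href: string; query: string; } }
    namespace ModuleB { export interface URL { href: string; query: Map<string, string>; } }

    // Code written against ModuleA's URL can't accept ModuleB's:
    function logQuery(url: ModuleA.URL) { console.log(url.query.toUpperCase()); }

    const fromB: ModuleB.URL = { href: "/x", query: new Map([["a", "1"]]) };
    // logQuery(fromB); // compile error: the two `query` types are incompatible.
    // The caller has to convert explicitly, and it isn't pretty:
    logQuery({ href: fromB.href,
               query: [...fromB.query].map(([k, v]) => `${k}=${v}`).join("&") });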
Do TS folks really attack Java specifically? Why? That doesn't make much sense to me. By and large when I hear TS aficionados speak they compare TS to vanilla JS and don't mention Java at all.
They use this excuse to try to convert experienced Java developers who've seen the light after switching to JavaScript and who are desperate to not go back to the static typing hell hole that they came from.
I'm one of these people (2 years of Java at university).
The thing is that I know exactly why so many developers like statically typed languages. I've been there in 2006 when I switched from ActionScript 2 to ActionScript 3; I remember clearly the following 2 years of ActionScript 3 when I really thought I loved static types.
Unfortunately, I also remember the years after switching back to JavaScript and the huge productivity increase that followed. The problem was not dynamic types, it was my lack of experience.
That's why I cannot go back. I'm using TypeScript at work now and although I really love the company mission and I feel that I'm producing really great work, I don't think that I can stay there much longer because TypeScript slows me down and makes me hate coding.
It's also possible that even those same bugs could have been prevented by hiring more experienced programmers. Or by using non-static verification tools, or testing processes, or many other tools in the toolbelt. Static typing is an ideology, almost a religion[1]. Not that religions are necessarily bad; they can keep a whole group of people under control. But I really don't like it when people say, "See! A static type would have fixed this issue," and then conclude from there that static typing is a good idea.
[1] This might piss you off, but it's objectively true. Here's how you know: static typing could easily be left as an optional choice throughout the code base, but no mainstream statically typed PL I know of leaves it optional. That's the definition of an ideology: it says this tool/thing/what-have-you is the right thing to employ in all contexts at all times, regardless of what your own judgement might conclude in the moment.
Dynamic typing is also an ideology by your definition. The ideology is that you must never add static types to your source code.
> Because static typing could be easily left as an optional choice throughout the code base but no mainstream statically type PL I know of leaves it optional.
I thought TypeScript, Flow, and Python 3 (MyPy) all support optional static typing. Plus, most languages have an escape hatch to treat an object as untyped or as a general type.
>"See! A static type would have fixed this issue." and then from there conclude that static typing is a good idea.
Why is that not a good conclusion? In the context of the article, static types would have prevented 38% of bugs at Airbnb.
> Because static typing could be easily left as an optional choice throughout the code base but no mainstream statically type PL I know of leaves it optional.
It is actually not easy to design and implement a statically typed language that gives you all of the benefits of types when you want them while still allowing you to ignore them in some regions of your program.
The gradual typing folks poured years of time into this and still haven't figured it out. Optional typing is more workable, but sacrifices many of the benefits (soundness, performance) of static types even in code that is fully typed.
You can have static types with type inference. I would actually prefer static typing if I did not have to annotate everything. But there are convenience trade-offs; for example, in a dynamic language you don't have to cast when you actually mean to concatenate a number with a string.
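A small sketch of inference at work (my own illustration):

    const count = 3;                 // inferred as number, no annotation needed
    const label = "items: " + count; // inferred as string; TS, like JS, allows
                                     // the implicit conversion for concatenation
    // Stricter languages make you spell it out: "items: " + String(count)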
No, inference doesn't have anything to do with it. Inference is easy.
The challenge is around the runtime representation of types. Almost all widely used statically typed languages maintain soundness by deferring some type checks to runtime. C++ has dynamic_cast. Java has ClassCastException and ArrayStoreException. C# has something similar.
When you add gradual typing where values and functions from untyped code can flow into regions of typed code, you need even more runtime validation to ensure you don't get uncaught type errors in your typed code.
The difficulty then is figuring out how to represent those types efficiently, which types to reify, etc. It's really, really difficult.
Most people talk about catching errors, but I think the biggest benefit of static types is performance, as they make the code easier to optimize. With TypeScript you do not get that benefit. Maybe we will see a new variant of "use asm" for JavaScript, but more human-friendly. I really wish TypeScript had used type inference instead of annotations, and only had a type-checker rather than transpiling. JavaScript is already strongly typed. And a completely static type system is a pipe dream, as anything connected to the real world has to deal with chaos.
Types are definitely an ideology of sorts, but that’s not a good argument: gradual typing is complicated and creates a lot of challenges for e.g., compiling fast code.
You spend more time reading code than writing it; something like 80/20 (forgive me, I cannot point to a study). Sure, more types will mean _more code_, but isn't it nice that the code is now explicitly typed, for you to read and understand?
I used to be a proponent of type annotations in JavaScript. But I figured out the reason was that I took C# code and translated it to JavaScript, which surprisingly works well; all you have to do is remove the type annotations. But without the type annotations I could no longer understand what the code did. To make the code understandable in JavaScript I had to rewrite it and name things properly.
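A hypothetical sketch of that kind of rewrite (my own illustration, not the commenter's original example; every name is made up):

    // C#-flavored original, leaning on the types to carry the meaning:
    //     decimal Total(List<Line> xs) => xs.Sum(x => x.P * x.Q);
    // Stripped down to JavaScript, `xs`, `P` and `Q` say nothing, so the
    // rewrite pushes that information into the names instead:
    function totalPriceOfOrderLines(orderLines) {
      return orderLines.reduce(
        (total, line) => total + line.unitPrice * line.quantity, 0);
    }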
Yeah, as a Python guy myself for years and years, I still like typed languages. I don't understand why people hate to define types, relying on things like compiler inference just to save some keystrokes. I think explicit types make code easier to read and reason about.
I'm a Python guy too, and what I like about Python is that I can choose between using it dynamically typed or statically typed. When I start a new script/app, I usually start without type annotations and, as soon as the data model becomes clear, I add type annotations and static type checking via 'typing' and 'mypy'. This is a workflow I enjoy a lot and I'm very productive with.
I think the significant difference between those two statements is that static typing is concretely achievable for relatively cheap effort (i.e. use a language with static typing), but "better tests" is something you will never finish doing. Even if you have 100% code branch coverage, your tests don't necessarily cover all the combinatorial cases that could cause bugs, so you could continue to write "better tests" to prevent further bugs. If the code is non-trivial, the combinatorial explosion is so big that you could probably write 100 times as much test code as implementation code, and you will still have bugs that could be solved by "better tests".
You battle combinatorial explosion by doing a risk analysis on the code.
You split your code into pieces and unit test those; with unit tests you can usually get full line and branch coverage easily and quickly.
Then you go one level up to the integration tests, where you test the highly critical paths in which the unit-tested pieces work together.
And finally a black-box e2e test to make sure the pieces work regardless of the implementation details.
Like tests, types come with a cost.
They are not free, and the category of errors they catch is quite simple.
Also, in my experience, people who depend on types usually skip writing automated tests and go directly to manual testing, skipping the rest of the testing pyramid.
The TypeScript fever smells like most of the other herd stampedes of the past several years. Check out the talk from a few years ago by Eric Elliott, "Static Types are Overrated". He goes over how types don't mean fewer bugs, and argues that if you're hitting a lot of type-related problems it probably has more to do with your testing hygiene. He also just wrote a summary of 2+ years using TypeScript and why it turned out to be more of a liability than a benefit.
But the fashion in programming languages (Scala, TypeScript, Python) of writing the type after the variable name has a disadvantage: when writing code, the IDE cannot suggest a variable name based on the type. This is an inconvenience you notice immediately if you are used to it from Java, where types are written before the variable name.
Sure -- but the majority of that static analysis (in the absence of type annotations) is inferring types and then running everything through the type-checker, right?
38% seems insanely high. Other estimates I've seen suggest more like 10%. If a code base has an unusually large number of bugs that types would catch, it feels like it is missing basics like decent code review or unit tests.
* I am a huge TS fan; I just don't think 38% of the bugs in a well-written JS app should be type-based.
Each language was created as a tool to solve a specific set of problems. Saying "this is the only and best one" is a recipe for disaster and a sign of emotional attachment.
Use the proper language for the job, be it C, Java, or Python. Don’t try to do silly things, like using JavaScript for the whole backend.
> Use the proper language for the job, be it C, Java, or Python. Don’t try to do silly things, like using JavaScript for the whole backend.
But all the cool kids are using JS these days! Why would one choose a solid language that doesn't break this easily, when you can just use the cool stuff?
We use Purescript at work. I don't remember the last time anyone on the team needed to fix a bug in the Purescript code. Our team had to fix around 4-5 bugs in the JS code in the past week alone.
The Purescript code is really well designed with proper usage of types everywhere possible.
Refactoring is done in Purescript fearlessly: if it compiles, it'll run correctly, or else there's a bug in the JS or backend code that sends wrong data to it.
I really wonder what the frontend ecosystem would look like if Purescript became a widely adopted language.
That's why I hate JS.
You need 10-12 tools and frameworks to do anything, and then you end up with bugs everywhere because of the poorly designed language.
Large systems should rely on industry-proven languages (derived from or based on C) to be safe.
It's no coincidence that there are approved coding standards that work (CERT, for instance).
The "good part" of Rust compared to its closest alternatives is the machine model, which is straight C. You can even directly transpile C into Unsafe Rust.
The reason why I love JS is that you do not need any frameworks, only the runtime. All you need is a text editor and an API reference (Node.JS, Web browser). All JS code is open source, so you can look at the code of the libraries you are using to see how it does stuff and learn a lot from it.
The arguments in that thread aren't very good. Everybody makes mistakes[0], types or not, and a type system doesn't prevent all errors. Every system that prevents errors comes with its own overhead.
I think there are situations where explicit types are good, and situations where they're unnecessary or get in the way.
Java's use of generics is a good example of the latter:
ArrayList<String> list = new ArrayList<String>();
One of those just shouldn't be there. (Edit: Java 7's diamond operator, `new ArrayList<>()`, already removed the duplication, and Java 10 added local type inference with `var`, so this is no longer necessary.)
But in function parameters, it's incredibly useful to be able to ensure that the parameters are of the type you expect (and preferably not null, which most languages don't offer). That prevents a lot of unnecessary boilerplate checking whether you actually have the right type in your hands.
Basically I don't need explicit types for stuff I declare locally, because I can see in the code what type it is. But for parameters and other data I get from other parts of the system (ajax calls, for example), enforced types are incredibly useful.
[0] Except Donald Knuth, who apparently cannot make mistakes.
The overhead is typically “a few extra keystrokes”, Java and C++ notwithstanding. There are more complex cases in which standard type systems aren’t expressive and expressive type systems are arguably too difficult, but even then you could use an “untyped” type like interface{} in Go and program that bit just like you would in Python while still enjoying type safety in the other 99% of your application.
Ooh, nice. That sinks that part of my argument, I guess.
Still, it doesn't change the fact that some type systems are cumbersome in some places, yet valuable in other places. I think we're currently seeing a movement towards type systems that do more for you while getting in the way less, and that's certainly useful.
I think most modern(ized) languages have reached a fairly good compromise. Full inference is typically infeasible or too expensive unless you stick to the ML family and Haskell. And even then, I believe you want explicit type signatures for self-documenting purposes, also to avoid unwanted generalization.
Exactly. Explicit function signatures are very valuable. Having the system check that the object you get fits your expectations is useful. There are other situations where I could do without them.
I do. I thought Java didn't have it, but apparently starting with Java 10, it does.
Type inference isn't always good, though. Scala's type system can be quite horrible in places, although I think the idea is generally good, particularly for local variables.
And this is ok. A shift away from strongly typed languages encourages laziness. A shift towards resilient micro services accommodates laziness. It’s good enough to get by and allow large companies like airbnb to grow. Clean code vs time to market is a debate that’ll never end.
One thing I've always liked about type systems is the compiler basically shaming me with "do you really know what you're doing?". Also, small refactors are more graceful.
Well sure, but what are the corrective steps? 38% of bugs could have been caught sooner/prevented with types.
The question to ask: What percentage of the 38% would have been avoided?
Similar problems have been solved in the past: what percentage of index out of bounds errors were caught because the concept was adopted by the language?
Can you give more context? What language are you talking about? What if I spent five minutes extra looking at the commit; would that have spotted the bug? In order to draw a fair conclusion, you should also look at commits without any bugs, like a blind test, to reduce bias and measure false positives (errors flagged that are not bugs). Maybe also use modern tools like inference engines that did not exist back when the bug was written. And compare with other techniques like unit testing to see which one is more effective, and whether there are any synergistic effects.
But I would say the documentation aspect is more important.
The need for a type system grows with the length of the feedback loop. You can maybe iterate fast on a React component in a playground without types, but it's expensive to crash rockets to check whether you forgot to handle some case.
Yet they didn't have them. Fallible humans didn't see the missing coverage. If they had that hindsight, they could have just not written the bugs at all.
Whenever I hear someone promote static typing for the type of thing you'd use JavaScript for, my first thought is: they aren't testing [properly].
I think static typing unquestionably helps if you have a young and undisciplined team. There are a couple of indicators that this is a problem for AirBnB. We know their switch from PG to MySQL was at least partially due to a lack of discipline. They also rely on strict linting, which in my experience a well-gelled, well-led team doesn't need.
UPDATE: My mistake. I confused AirBnB with Uber for the DB switch.
Tests and types are complementary. In fact, testing becomes much more useful when dealing with static types.
As a simple illustrative example, if a function argument is type-checked as a boolean then we can write two tests (one for `true`, one for `false`) which cover all possible executions (note that this is far stronger than merely code coverage).
This isn't ever possible in an untyped language, since arguments can have any value, and the number of possible values is countably infinite.
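A tiny concrete version of that claim (my own sketch):

    // With a checked boolean parameter, two tests cover every possible input.
    function describe(flag: boolean): string {
      return flag ? "on" : "off";
    }
    console.assert(describe(true) === "on");
    console.assert(describe(false) === "off");
    // Without the type, callers could pass 0, "yes", null, or an object:
    // an unbounded set that no test suite can enumerate.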
Tests and types can be complementary but this is not always the case. If you use a language with dependent types you can prove that your functions are correct, making tests redundant.
> Tests and types can be complementary but this is not always the case.
I never said it was?
> If you use a language with dependent types you can prove that your functions are correct
Yes, for certain values of "prove" and "correct". Even if we ignore the cost of figuring out the proofs, it can be tricky to encode our mental models into formal statements.
Automated tests are useful for checking the correctness of our implementation and our specification.
> making tests redundant
Not at all, in the same way that lorries don't make bicycles redundant. In fact, I like to use dependent type checkers to run my tests, e.g.:
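(The original example appears to have been lost; here is a minimal sketch of the idea in Lean, my own illustration, not the commenter's code.)

    -- A compile-time test: the build fails unless the equation actually holds.
    def double (n : Nat) : Nat := 2 * n

    example : double 3 = 6 := rfl  -- checked by the type checker, not at runtime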
This way, automated tests are run as part of the compilation process; and the build will fail if any of the tests fail (we can also use fancier encodings than Bool, to get nicer error messages, etc.).
Agda if you want to write proofs, Idris if you want to write programs. There is also Coq which is more popular but I would not suggest it to a beginner.
(note: both Agda and Idris can be used for both proofs and programs, it's just that each have their own focus)
Whenever I hear this type of argument I assume it’s from someone in a stable shop with a clear mandate and low headcount. Hyper growth startups don’t allow for this kind of stability, tooling scales infinitely better than just training everyone to do the right thing, especially when the problems you face are rapidly evolving.
I don't think you're wrong, but if you believe (like I do) that tests are a time saver, even in the short term, then once you have good tests, typing doesn't save 38% anymore; it saves maybe 1%.
I'll add that, in my experience, in an untyped language you don't need more or even different tests. Why? Because in either world your boundaries are untrusted (and more than likely untyped), and everything internal you can trust. User data, external systems, possibly internal systems, queues, storage: all that data is going to be varying degrees of unsafe and untyped. Those entry/exit points need to be fuzzed, unit tested, and integration tested.
Again, I don't think you're wrong; I just think you want/need the tests anyway (because typing isn't remotely enough).
One could imagine that a type system is actually a set of tests confirming that a function will only accept arguments that are typed according to the function's signature.
To confirm this in a dynamic language, one would need to pass every possible permutation of every type available to the language to the function under test, to confirm that a function of, say, (string, int, boolean) would only ever work in the expected manner if the first argument was a string, the second was an int, and the third was a boolean.
Have you tried passing every possible permutation of typed arguments to your functions in test cases? If you haven't, then your program is less tested than a similar program written in a typed language. In a typed language you get all of that for free.
Do you write tests for every single function call? Every single variable assignment?
You don't. Even 100% coverage isn't enough to map all the potential issues with incorrect types being passed around.
Types are a highly compact way of adding tests and documentation to your code that automatically integrates with everyone's IDEs and your CI pipeline. I don't know why you wouldn't use it if you think these things are useful.
"No thanks, I'd rather spend all my time writing these things I could write much faster using simple types."
Example: two functions, "sub" and "add", both with the type signature (int, int) -> int. Yet the types will not help you prove you are using the functions correctly, and they also do not guarantee that the output of these functions is always what the documentation promises.
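A TypeScript rendering of that point (my own sketch; the variables are made up):

    const add = (a: number, b: number): number => a + b;
    const sub = (a: number, b: number): number => a - b;

    // Identical signatures, so swapping one for the other still type-checks,
    // but it is still a bug:
    const price = 100, tax = 8;
    const total = sub(price, tax); // meant `add`; the compiler stays silent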
> They're also rely on strict linting, which in my experience, a well gelled well led team doesn't need.
This may be true of a _small_ well-gelled, well-led team. But I work at Airbnb and we're too big to rely on just gelling with each other and having good leadership for consistency anymore. In my experience, when you have hundreds of developers it ends up being more cost-effective to decide on some style rules and then have a linter check them and remind people when they accidentally stray. It also helps keep code reviews focused on content, not style. (This used to be a problem at Airbnb but, as far as I can tell, mostly isn't anymore, because of linting.)
Linting and code-formatting checks should be strict to help keep people writing to one standard. I don't think it's a matter of engineer experience; it's more about 1) time wasted in code reviews on trivial errors, and 2) the fact that scaling a team can be done far more quickly if you aren't having to explain what should be commented, what spacing to use, etc.
I was looking for a link to add to my post; the fact that I couldn't find one should have set off alarms. My mistake, I mixed up Uber with AirBnB. I updated my post.
[0]: https://docs.python.org/3/library/typing.html