I think it's quite interesting to see that originally node.js was presented as a bloat-free alternative to "enterprise languages" like Java, C# or even Python or Ruby. A lot of complexity was subsequently added in an ad-hoc way which has resulted in (for example) a package management system that's wildly out of control.
It's very popular of course, so I'm definitely not arguing with that. However, the tooling that was originally held up as an example of unneeded bloat and complexity (Maven) has now been reimplemented in Javascript, but poorly (npm).
Not everything that gets added is stuff that people reasonably need, either. If you cater to everyone's needs, then you'll end up with 10 solutions for the same problem, because every one of your users has their preferred one.
I think Javascript suffers from this quite a bit. ES6 "classes" should never have made it in, for example. Not only did they add an extra level of abstraction for beginners to learn, but the only reason for doing so was "my code doesn't look like it does in other languages"...
> but the only reason for doing so was "my code doesn't look like it does in other languages"...
As much as I beat the FP drum these days at work, I find the class syntax a much nicer way of organizing solutions to certain, pardon the pun, classes of problems.
Whether or not you find this to be semantic diabetes is a matter of taste, I suppose. I'm curious what, specifically, you find to be the major issue that makes you say they should have been left out.
There are very few things JS classes can do that ES6 modules/named exports along with closures returning plain objects can't do better, in terms of code organization, isolation, and extensibility. For the 6 times a year where I need an actual class (it does happen!), I can write the prototype code.
The main issue with adding classes is that they're very, very complex if you want to make them useful. The initial version was pretty harmless, but it was also almost pointless. Now they need to backfill all of the missing features (e.g. private fields), which brings in an enormous amount of complexity. Most of the time, if I need private fields, I can just use symbols (not quite private, but close), or I can do
    function makeThing() {
      let privateValue = 123;  // "private" is reserved in strict mode, so name it something else
      return {
        // functions here close over privateValue, e.g.:
        getValue() { return privateValue; },
      };
    }
It adds very little (there ARE things classes are better at) compared to the insane amount of work that has to be put into the language to get it all working. Decorators are in a similar boat: many decorator usages can be expressed just as easily with a higher-order function, so adding the extra syntax is just bloat.
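For instance, here's a minimal sketch of the higher-order-function alternative (the `logged` wrapper and `add` are invented for illustration):

    // A "decorator" as a plain function that wraps another function.
    const logged = (fn) => (...args) => {
      console.log(`calling ${fn.name} with`, args);
      return fn(...args);
    };

    const add = (a, b) => a + b;
    const loggedAdd = logged(add);
    loggedAdd(1, 2); // logs "calling add with [ 1, 2 ]", returns 3

No new syntax, and it composes with everything else in the language for free.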
The cost isn't worth the reward.
The biggest thing classes give us that is very difficult to replace in vanilla javascript is a semantic construct that is easy to statically analyze. In the typed flavors (Flow, TypeScript), I can analyze the interface of plain objects, but not in vanilla JS. That's part of why React using ES6 classes can be useful.
That's a great benefit, but I'm not sure it's worth the trouble.
> There's very few things JS classes can do that ES6 modules/named exports along with closures returning plain objects can't do better
That's a really good point. I was about to disagree with you but then I created a thought experiment.
Thought Experiment:
I wonder what the JS landscape would look like if ES6 Modules had been introduced as part of ES5 about 8 years ago? I could definitely see how that would make classes far less appealing if we already had a great module system (sure, CJS existed, but browsers didn't support it).
Looking at the timeline of when these features were implemented in all major browsers:
* ES6 Class[0]: implemented 2.5 years ago
* ES6 Modules[1]: implemented 1 month ago
And ES6 classes were designed long before they were implemented, around the time people were making class hierarchies with backbone and AMD modules were the future.
The rise of Java and the OOP revolution isn't that far behind us (2 decades seems like a lot in the tech world, but it's still within a single generation of humans).
> In the typed flavors (Flow, TypeScript), I can analyze the interface of plain objects, but not in vanilla JS.
I've found that developing JavaScript in an IDE that reads and validates type information from JSDoc allows me to introduce strong typing while maintaining the flexibility and simplicity of vanilla JavaScript without getting as bogged down as I get with Typescript.
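For example, an IDE that understands JSDoc will type-check plain JS like this (the names are invented for illustration):

    /**
     * @param {{ id: number, name: string }} user
     * @param {number} [indent] optional, defaults to 2
     * @returns {string}
     */
    function serializeUser(user, indent = 2) {
      return JSON.stringify(user, null, indent);
    }

    serializeUser({ id: 1, name: 'Ada' }); // OK
    // serializeUser('oops');              // flagged by the IDE, no compile step needed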
For sure, though my comment referred more to analysing code for things like automatic transformations at scale (like codemods) than during development. Like, figuring out that a stateless function component is a component is hard.
You can do almost everything with jsdoc comments in flow and TS, of course. It's awesome.
I loved classes at first, but as I've gotten more into it I've found that classes and prototypes are redundant. Closures let me maintain all of the state that I need without the extra boilerplate.
Or, you know, you could actually learn the language you use. Prototypes are not nearly as bloated as classes, and you usually don't even need them. OO is not the one true paradigm of coding.
> I think Javascript suffers from this quite a bit. ES6 "classes" should never have made it in, for example.
Well, I'm really happy that the class-statement got in ES6 though.
Before that, whenever I needed something class-like, I had to search for how you do this in Javascript, and find five different answers - no, but really, which is the proper one for prototype-based inheritance? - waste an hour (or more), and frankly still not know. There are just so many ways you can do it; which is the RIGHT one? Javascript wraps in on itself in so many cool ways, but there were never any definitive answers, just more rabbit holes.
Now, there is the class-statement, and it's one less rabbit hole to get trapped in. I actually get more stuff done now that there is one right way to define a class, or class-like object with a constructor, properties, methods, etc.
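Now there's exactly one obvious way to write it, e.g. this made-up Point:

    class Point {
      constructor(x, y) {  // constructor
        this.x = x;        // properties
        this.y = y;
      }
      get length() {       // accessor property
        return Math.hypot(this.x, this.y);
      }
      scale(k) {           // method
        return new Point(this.x * k, this.y * k);
      }
    }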
A similar thing goes for the arrow function notation. Javascript, and the event-based environments it usually operates in, wants you to use anonymous functions a lot. But the relatively verbose way to define them still held me back from using them as freely as I wanted, trying to "optimize" them away if possible.
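A small sketch of the difference, using Node's stock EventEmitter:

    const EventEmitter = require('events');
    const emitter = new EventEmitter();

    // Pre-ES6: verbose enough that I kept trying to "optimize" these away
    emitter.on('data', function (chunk) {
      console.log('got', chunk);
    });

    // ES6: cheap enough to use anonymous functions freely
    emitter.on('data', (chunk) => console.log('got', chunk));

    emitter.emit('data', 42);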
Yea, I don't use the 'class' keyword in ES6 at all (but then I keep to a fairly functional style in my JS). Modules and lambdas are the killer features in ES6 as far as I'm concerned.
Yes, because the loop never solves the original problem. It's about organization and bloat, not pure speed and leanness.
Rebuild from scratch but also recreate all the existing functionality in a much better standard library and finally the chain can be broken. But nobody wants to do that.
Yes, arguably the .NET Framework almost did it and is still one of the most productive frameworks available, but .NET Core has definitely improved things substantially. It's fast, well-designed, and full-featured and I expect usage to pick up greatly.
What I don't like in Microsoft's frameworks is that they've made lots of things multiple times in slightly different variations, like they always do with all their software (10 variations of each type of program, outdated before they were finished). Mostly this exists for historical reasons, but it only underlines the problem of a multi-billion-dollar corporation having the design skills of a sophomore. They redo and redo things, bloating their frameworks and increasing their number, and you have to guess which CookieContainer you should use this time. It makes me understand why language designers like the Rust developers insist on a small core library: it's better to have one separate library that does everything regarding cookie management (and lets you control its functionality by including additional traits from it) than to have incompatible variations of it in the standard library and in each framework.
So far the package manager hell has been kept in check because they keep redoing everything in such a way that you don't intermingle it. So when you're on MVC5 you're on MVC5, and when you're on AspNetCore, you're on that. You're not using v5 of library X and v6 of library Y. Likewise, the startup and DI stuff has fully rebooted twice in a few years. But nonetheless, some of that sort of package hell has already seeped in, where you're using different packages that depend on different versions of some underlying thing with breaking changes. I think the choices are either keep rebooting everything or stop making new stuff.
Yes, but it's rapidly getting better with .NET Standard combining all the libraries into a single definition that can be used on any framework implementation.
MVC5 was never released though, and the changes have been rather minimal from ASP.NET Core v1 to v2 with straightforward migration guides, so it might look messier than it actually is if you were working through all the previews and release candidates instead.
Nevertheless, Microsoft has a long history of having messy v1.0 with most of the stability coming after v2.0, so you can consider the foundation pretty stable now that it's on v2.1 and more.
.NET Core F# support has been pretty great since the initial stages of .NET Core, in my basic usage. The biggest challenge seemed to be around type providers (F#'s system for generating strongly typed classes from dynamic data such as XML, CSVs, HTTP, etc.), but that's largely resolved. More info at https://github.com/fsprojects/FSharp.TypeProviders.SDK
C# is very verbose and tedious compared to more expressive languages - having to deal with the CLR types/API at runtime while using a language with very limited expressiveness (C#) is not very productive. It's better than Java, if that's what you're aiming at - but the JVM has an incredible ecosystem of stuff that works, much larger than .NET Core, which is not very mature in many areas (we recently had to revert to .NET 4.7 because some encryption method used by a government SOAP service we were talking to wasn't supported).
TypeScript, and the JS underneath it, is actually quite malleable - you can escape static typing at any point and revert to the simple JS object model when things don't map cleanly in the type system, and still have types at the boundaries. That makes metaprogramming trivial in some cases where it would look like a monstrosity in C#.
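Something like this sketch, say (the `User` shape is invented):

    interface User { id: number; name: string; }

    // Typed at the boundary, dynamic in the middle:
    function loadUser(json: string): User {
      const raw: any = JSON.parse(json);   // drop into plain JS object land
      raw.name = raw.name || 'anonymous';  // poke at it freely, no ceremony
      return raw as User;                  // hand back a typed value
    }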
F# is interesting and has a lot of advantages over C#, but few people seem to be willing to invest the time to pick it up in the .NET community.
So I don't really view .NET Core as a superior alternative. I've worked in JVM land; they are more mature, and while Java sucks, there are other languages on top of the JVM as well that are decent to use (Kotlin ~ C#, Scala ~ F#).
> using a language with _very_ limited expressiveness (C#) is not very productive.
o_0.
Think you need to check yourself mate.
I believe the productiveness of more "expressive" language tends to be undermined by the loss of productivity that occurs when you're compelled to write blog posts or comment on hacker news about how amazingly productive and expressive your language is.
I can do like 3-4 hours of productive work a day, realistically - after that I lose focus. I can push this in some periods, but that's the amount of time I limit myself to in order to stay functional over the long term.
If I need to waste that time sifting through boilerplate then I'm pretty upset, because I get less shit done in that time window.
Chatting on forums is a casual brain teaser and keeping up to date on industry stuff.
I don't think "expressiveness" or "boilerplate" are the things that slow me down. I use Go, and I find that it is both expressive and a little verbose, but it's still very simple and there's usually one clear way to do things, so I find that I can move a fair bit faster than I can in C# _or_ Haskell (in the latter case, it might just be that Haskell has a huge learning curve and I'm nowhere near over it).
I would suggest using more personal language when expressing personal opinion and toning down the force (very).
> [I find that] using a language with limited expressiveness (C#) is not very productive for me.
Like, I'd figure you can be mad productive in any language (even COBOL?) although I'm only completely cosy in a couple. There's no need to be so dismissive of the tools that others use.
Yes, Typescript (also from Microsoft) is fascinating and fantastic at combining the strengths of static typing while still maintaining all the flexibility of dynamic types if necessary. However, it's pretty much the only realistic non-academic example of such a thing, so basically everything else pales in comparison if that's what you're looking for.
Why is C# not expressive? It has the DLR and the `dynamic` keyword, which behaves just like JS typing if that's what you want - it seems like your issue is really with static typing in general. Functional languages are nice, but it seems C#, with its slowly and carefully integrated functional extensions, is actually more productive for most developers.
Dynamic doesn't behave the same as JS typing: you're still using the CLR object model and typing rules, you're just losing compile-time checks. It gets complicated really fast if you want to do metaprogramming, even with the DLR, and it's not really ergonomic in C# (casting/boxing primitive types, etc.).
Think about AutoMapper, and then compare it to a TS solution using the spread operator. How much AutoMapper boilerplate crap do you see in your typical enterprise C# project?
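Roughly what I mean, sketched with invented types:

    interface Customer { id: number; name: string; email: string; }
    type CustomerDto = Pick<Customer, 'id' | 'name'>;

    // The whole "mapper" is one expression:
    const toDto = ({ id, name }: Customer): CustomerDto => ({ id, name });

    // And where shapes mostly overlap, spread copies the rest:
    const withEmail = (c: Customer, email: string): Customer => ({ ...c, email });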
And that's not even touching on functional features - like, you can't even have top-level functions in C#; it's "one class per file" dogma + multiple wasted boilerplate lines and scrolling. I recently rewrote a C# program in F# - didn't even modify much in terms of semantics (OK, having discriminated unions and pattern matching was a huge win in one case); just by using higher-level operators and grouping stuff, the line count went down to 1/3 and was grouped into logical modules. I could read one module as a unit and understand it in its context, instead of having to browse 20 definition files with random 5-line type definitions. I could achieve similar improvements by rewriting to TS or Python.
C# adds overhead all over the place; people are just so used to it they don't even see it as useless overhead but as inherent problems they need to solve - like, how many of the popular enterprise patterns are workarounds for language limitations?
When I bring this up people just assume I'm lazy about writing code - but I don't really care about writing the code out; tools mostly generate the boilerplate anyway. Having to read through that noise is such a productivity drain, because instead of focusing on the issue at hand I'm focusing on filtering out the bloat from the codebase.
This sounds like a personal preference for dynamic vs strongly typed.
I could rewrite your entire comment in reverse about how I find C# highly expressive and readable while dynamic languages or Kotlin (blech) are a mess of inconsistent whack-a-doodle experimentation.
But my opinion is useless.
The value in any platform is productivity and if any given team can be productive, it doesn't matter if it's COBOL, RPG-3, Pascal, BASIC, or a functional language like F# or plain old JavaScript.
Actually, I like static typing - I mentioned I rewrote a project in F# in about 1/3 of the code of the C# solution.
It's more that C# is static typing done poorly IMO - a relatively limited type system that adds overhead compared to dynamic languages or more expressive static languages.
I'm having a hard time understanding what's fascinating about typescript.
I agree it makes JS better. I agree it's a good tool for its purpose.
But "fascinating" ?
It's hardly the most elegant scripting language out there (Ruby, Python, Kotlin and Dart don't have to live with the JS legacy cruft).
It has a very small ecosystem outside of the web.
The syntax is quite verbose for scripting.
It has very few data structures (and an all-in-one one).
Very poor stdlib.
It still inherits important JS warts like a schizophrenic "this".
Almost no runtime support if you don't transpile it (which means hard to debug and need specific tooling to build).
And it's by no means the only scripting language with good support for typing (e.g. VSCode has great support for Python, including IntelliSense and type checking).
What's so fascinating about it?
What fascinates me is that we are still stuck with a monopoly on JS for the most important platform in the world.
Typescript IS javascript, so of course it inherits all of its problems. The data structures and standard library are what you get from JS, nothing more. It's called a programming language, but it's more of an extension to JS with a powerful compiler.
The typing system is what is special though, especially in how seamless it is in adding strict types alongside pure dynamic objects, but also allowing you to choose pretty much anything in the middle of that spectrum depending on your definitions.
You can have a few strong-typed properties mixed with others in a generic type that inherits from something else but can only take a few certain shapes. It's unlikely you need all that in most programs but it's the fact that you can do it which makes it great. In fact, the Typescript type system is actually turing complete.
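A small sketch of that spectrum (names invented):

    // A few strongly-typed properties mixed into an otherwise open object:
    interface Config {
      host: string;
      port: number;
      [extra: string]: any;  // everything else stays dynamic
    }

    // A type that can only take a few certain shapes:
    type Shape =
      | { kind: 'circle'; radius: number }
      | { kind: 'square'; side: number };

    function area(s: Shape): number {
      return s.kind === 'circle' ? Math.PI * s.radius * s.radius : s.side * s.side;
    }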
> Typescript IS javascript, so of course it inherits all of its problems. The data structures and standard libraries are what you get from JS, nothing more. It's called a programming langauge but its more of an extension to JS with a powerful compiler.
That's pretty much my point.
> The typing system is what is special though, especially in how seamless it is in adding strict types alongside pure dynamic objects, but also allowing you to choose pretty much anything in the middle of that spectrum depending on your definitions.
> You can have a few strong-typed properties mixed with others in a generic type that inherits from something else but can only take a few certain shapes. It's unlikely you need all that in most programs but it's the fact that you can do it which makes it great. In fact, the Typescript type system is actually turing complete.
Apparently you haven't read my comment, because I clearly say it's not special. Other languages do it too.
Yep. There are some languages that start out trying to solve fundamental productivity issues in previous languages - some more than others.
I think we had a generation of ecosystems - Node, Ruby, Python - that tried to solve the unapproachable systems around the Java/etc. ecosystems and make them more open.
They succeeded, but the next generation seems to have been about solving the plethora of tools that came with those languages. Rust, Go, etc., with their first-party tools, are trying to improve upon that - and yes, I think Rust is by far the best implementation I've seen.
I'm interested to see what the next generation is.
Rust is designed that way, to be fair. They expressly did not want to be batteries included like python is. The reasons are what they are and not particularly relevant to the conversation, but pulling in well designed third party crates is the point.
Speaking from a python user's perspective, the batteries-included philosophy works great when you have a neutral implementation. Python does a good enough job, and provides extensibility, in a way that means I don't need to download a package to do basic things. On the other hand, I have to spend hours trying to find a package in JS that just gets shit done. The third-party package way is only required for UI parts, because you don't want everything to look generic. But having a good standard library for the non-user-facing stuff is essential. That's why every node project ends up with a thousand dependencies: the language is not batteries included. In JS there is no "one correct and obvious way to do everything", which makes doing basic programming painful.
This was a really funny experience for me as a self-taught guy going in the other direction. I started with Node in my spare time, and when I finally got a professional coding job my first project involved Java and Maven. I was kind of dreading it due to Java's reputation as this big bloaty terrible enterprise language, but once I actually got started I was like, "Man, this type safety thing and opinionated build tool thing etc etc are really nice." By no means is it (or any language) perfect, but a lot of the criticism suddenly seemed really overblown.
It's currently happening with JSON (instead of XML, instead of CORBA), despite the brutality of ripping out comments to keep it simple. We now have JSON Schema, soon JSLT, JSON namespaces, etc.
To be fair, it's not impossible for some improvement to occur in this process.
Exactly! "originally X was presented as a bloat-free alternative to 'enterprise languages'"
For a while, "Burn the Diskpacks!" was a battle cry of the Squeak Smalltalk community. That sort of policy fights bloat, but leaves old users in the lurch. I think that we are now at the point where a language/environment can trim bloat while not abandoning old users. If the language has enough annotation, and has the infrastructure for powerful syntactic transformation tools, then basic library revisions can be accompanied by automated source rewriting tools. We were pretty close to it in Smalltalk, without the annotations.
But we can learn from prior mistakes in each iteration and spring clean the software logic. I know it's a lot of effort to seemingly reinvent the wheel each time, but I like to think it does yield some benefit in terms of efficiency and cleaner logic.
Which really isn't so bad. Y eventually goes corporate, and is still presumably better than X, having learned from its mistakes. But for those who hate the new bloat, along comes Z and the cycle repeats itself. Chicka Chicka Boom Boom.
I think vi -> vim -> neovim shows a pretty good model.
Neovim is an effort to modernize and remove cruft from vim, so they get to keep all the good parts and throw out the backward compatibility. If it works out, it can eventually replace vim - not too different from what vim did to vi.
I'd like to see similar stuff done to much of the GNU tools. Make, for instance, has to worry about backward compatibility and POSIX compliance, which makes it hard to progress. As of today there have been about 12,000 attempts to replace it with something else, and I find all of them inferior for one reason or another; they've all reinvented the wheel poorly. If someone had taken the fork-and-modernize approach we might have something better by now.
It doesn't even have to be a "hostile" fork. The same can be done by the developers of the existing tools.
I don't think the loop is necessarily bad, it shows progress.
Think about Java, it solved a class of problems that C was unable to address (e.g. unsafe memory, native threads). Thus enabling a new class of programs. But the new class of programs created opportunities for new platforms to solve with the benefit of a clean slate and fresh design having learned from past successes and failures.
I'm increasingly skeptical. Maybe we move ahead a few inches each cycle, but it's starting to look distressingly like each generation of programmers has to learn all the lessons that their greybeard predecessors learned the hard way. Then, when they've achieved some level of enlightenment, the next batch of bright-eyed whippersnappers comes along to rinse and repeat.
There's a disturbingly low-level of historical knowledge passed along in programming. Some bits and pieces are encountered in a quality Computer Science curriculum, but usually in rarefied, theoretical form, and inevitably balkanized into drips and drabs as part of subject-oriented coursework.
It's interesting to place today's techs on the Java maturation timeline - each became what they thought they hated but realized may have existed for some necessary reasons.
New platforms bring exciting and meaningful evolution, often at the cost of what techs like .NET and Java have a few decades' advantage in. It's also interesting to see what Java devs are innovating with themselves; Scala and Kotlin both have good things happening.
Maybe using one large, inter-syntax friendly world like JVM will help.
When experience is overlooked for youth, we relearn and reimplement the same libraries repeatedly in every new tech, to feed some developers' need to build temples to their greatness.
Still, Fitzgerald's quote comes to mind - "So we beat on, boats against the current, borne back ceaselessly into the past." - and technology is held back by reinventing the wheel.
The biggest problem I see is the weird hole circa 2006 that culminated in Sun selling to Oracle, which kind of stillbirthed Java as the next great language.
I can credit that hole with giving C# the advantage in that tight niche, and with stalling the development of the JVM platform in general.
By the time the rust on the JVM improvements was dusted off, all initiative was lost. Java was playing catch-up to the competition.
As we're seeing with WhatsApp, guardianship and supporting the direction of a project isn't easy. I'm not sure where Java would have ended up if someone else took it.
Additionally, Oracle haters seem to forget Oracle was one of the very first companies to get into bed with Sun regarding Java, with their whole Java-based terminals idea and porting all their Oracle Database GUIs to Swing.
I think you gotta have a good understanding of the domain and use cases you want to hit (which is really hard, especially so when it's a general purpose programming language whose domain is... everything), and design from the start with a vision of hitting those use cases, instead of having to shoe-horn them in later.
Of course, use cases will still evolve, and your initial understanding is always flawed, there's no magic bullet, designing general purpose software (or a language or platform!) meant to hit a wide swath of use cases flexibly is _hard_.
And then, yeah, like others have said, you need skilled, experienced, and strong leadership. You need someone (or a small group of people) who can say 'no' or 'not yet' or 'not like this' to features -- but also who can accurately assess what features _do_ need to be there to make the thing effective. And architects/designers who can design the lower levels of abstraction solidly to allow expected use cases to be built on top of them cleanly.
But yeah, no magic bullet, it's just _hard_.
As developer-consumers of software, we have to realize that _maturity_ is something to be valued, and a trade-off for immature but "clean" is not always the right trade-off -- and not to assume that the immature new shiny "clean" thing will _necessarily_ evolve to do everything you want and be able to stay that way. (On the other hand, just cause something is _old_ doesn't _always_ mean it's actually _mature_ or effective. Some old things do need to be put down). But resist "grass is always greener" evaluation that focuses on what you _don't_ like about the original thing (it's the pain points that come to our attention), forgetting to take into account what it is doing for you effectively.
Refactor and trim the bloat on the basic libraries, but have a policy where bulletproof automated source rewriting tools are provided in those cases. Perhaps this isn't possible with Javascript, but it might be possible with other languages.
If you think anyone has "bulletproof automated source rewriting tools" I've got a bridge to sell you.
I've used an excellent one. The Refactoring Browser parser engine in Smalltalk. I've used it to eliminate 2500 of 5000 lambdas used in an in-house ORM with zero errors -- all in a single change request. (Programmers were putting business logic into those lambdas.) Like any power tool, it's not stupid proof. However, it gives you the full syntactic descriptive power of the language. So if you can distinguish a code rewrite with 100% confidence according to syntactic rules, then you can automate that rewrite with 100% confidence.
Here's where it can go wrong: if your language is too large and complicated, there's a real probability you'll run into a corner case that will trip you up. Also, it will always be possible for a given codebase to create something which is too hard to distinguish, even at runtime. (You can embed arbitrary code in a Refactoring Browser Rewrite transformation, so you can even make runtime determinations.)
"Bulletproof" isn't "invulnerable." A vest with an AR500 plate will stop certain bullets hitting you in certain places. It won't protect you from being stabbed in the eye or stepping on a landmine. Despite that, it is still a useful tool.
Watching the Javascript community poorly reinvent the wheel has been very disappointing. Very simple mistakes - like immutable, never-changing build releases, which Java developers understood 15 years ago - have become recent front-page news in this community. Ironically, even though all the code is open source, pre-existing knowledge does not get leveraged in the open-source world. There's a kind of market failure at work here, it seems: the lack of commercial selective pressure results in the flourishing of lots of poorly researched open-source solutions.
This issue is not just a lack of learning from past failures; it's an active issue that is systemic to web development, especially node. Everyone wants to reinvent the wheel rather than supporting a similar, already existing project. I don't know if it's that everyone wants to be the lead on something or if they all lack group skills, but there is no reason we need dozens of similar, partially functional libraries. I can barely, and I mean sooo very barely, get behind the fact that all of these SaaS companies need to create their own versions of frameworks, but it amazes me just how many square wheels there are in the web community. It was one of the major reasons it took me so long to start doing full-stack development: just way too many cooks, all wanting to make almost exactly the same meal, only theirs is superior.
It’s because the average dev has less than 5 years of experience according to the StackOverlow survey, and web is the fastest growing field inside software engineering.
A large majority of people you’re chiding for not learning from others, don’t even realize those other things exist.
But it's not even independent green developers, it's everyone. Chai, mocha, jasmine, jest, should, expect, lab...omg do we really need another unit testing library? Sure they are all slightly different but there is no reason they all couldn't be condensed down to one or two libraries. Shall we list all the reactive UI frameworks? Or routing frameworks? Everyone is at fault here.
Chai is not a testing framework, it's an assertion library compatible with all the other main testing frameworks you mentioned. Yes, we do need experimentation and innovation in testing frameworks. Jest was a real innovation in the space and is particularly awesome for React testing with its snapshots feature. This kind of argument never gets made with anything else. "Why can't we all just stick to the Model T. It's perfect."
I am all for experimentation and creating something new. But so many of these projects out there are not forks of current projects; they are complete rewrites. Is this because the new project is vastly different? Nope! That is the issue: they aren't extremely different.
The reason this is a problem is because web tech is constantly changing, to the point that so many of these projects end up in the scrap heap far faster than other tech. It causes problems with long term service due to compatibility issues with ever changing dependencies.
I have a feeling that JavaScript, and some other areas of open source, have a popularity contest problem - people building projects not because they're needed or useful, but for that brief moment of Internet fame.
It gets worse when, instead of your CV, you get hired by startups based on your Internet fame, or on wasting your private life building a Github (sorry, Gitlab) portfolio.
Half of those are assertions libraries, not unit testing libraries. What are you comparing this list to? What is the appropriate number of unit testing libraries a language should have? Do you scale that number for community size?
Ignorance is curable, but the cure requires the cooperation and desire of those who lack. From my vantage point, the world of software development seems filled with mediocre individuals who all think of themselves as the John Galt of software.
I like the energy around the javascript-everywhere movement. So what if they reinvent the wheel? Sometimes you find a better wheel, and break the rules along the way.
There is something exciting about developers using a language in ways it was never designed. Then having the language change to support the changing ecosystem...
So true, at the end of the day people are working like this because it's the way they feel the most passionate about. You can't really blame them considering how disinterested people can get working on that last 10% of even their own projects.
haha, why do we do this? Should we judge programmers by how finished their thing is? I think it is the most impressive quality when I see it (which is none of my own stuff, haha)
It's when you build a business on a technology and then have to re-invest to rebuild the product - that's when it becomes an issue. Think of all the startups that built running businesses on Angular 1.
Sounds to me like the list has one item: “the npm registry once allowed users to delete packages”. [2] and [3] have nothing to do with immutability. None of them have to do with reinventing the wheel, either, unless you wanted Node to use Maven for package management?
They don't mean immutable in the language sense, but an immutable packaging system: one where you can add packages but not remove them, so as not to break things. That's the common form among most maven/cargo/hunter/etc. dependency packaging systems. It's generally considered that npm supporting deleting packages was a major mis-design, which became very public when a popular tiny package got deleted and broke so many things. They learned that the hard way instead of learning from the systems that came before (obviously not cargo, but you get the drift ^.^).
This stuff isn't relevant at all to the talk - he never talks about npm or anything to do with package managers but instead how node does imports etc.
But anyway: [2] is a problem in many other package repositories too. [1] would probably be a problem for many, given legal pressure (vendor your shit, that's the solution). [3] was a bug, not a design issue - no package management system is immune to bugs.
The one thing Java has is that it uses namespaces, which may help with [2] (but barely). [2] certainly has been a problem in PyPI.
Certainly all of this could happen to PyPI. We see it happen with js more, I think, because js happens to be extremely popular so there's a ton of packages for it and it's also much younger (especially node) than others.
> This stuff isn't relevant at all to the talk - he never talks about npm or anything to do with package managers but instead how node does imports etc.
He does have it in his slides.
Slide titled: "Regret: package.json", last 2 points:
> Ultimately I included NPM in the Node distribution, which much made it the defacto standard.
> It's unfortunate that there is a centralized (privately controlled even) repository for modules.
"The Wheel of Time turns, and Ages come and pass, leaving memories that become legend. Legend fades to myth, and even myth is long forgotten when the Age that gave it birth comes again." - R.Jordan
I'm afraid the current frantic pace of reinvention in JS/web might cause the Breaking of the World and throw us into a Third Age where no one quite remembers the true lessons of the Ages before.
More frustrating, at least for me, is that some of us have been warning about these things for absolutely years without many paying any mind, only for them to keep happening again. E.g. https://news.ycombinator.com/item?id=16090120
I've found that some people tend to take a ton of pride in the assumption that they have to make mistakes to learn from them, but you almost always want to learn from other peoples' mistakes first. Probably overlaps quite heavily with those people who desperately want to tread on new paths.
That's a good example of the Innovator's Dilemma: the enterprise incumbent is unseated by some "crappy" lightweight solution that is easier to get up to speed and solves enough of the problem. The complexity, accidental and essential, comes later.
Yes, this is exactly what's happening. Existing tools are seen as too complex because people don't seem to be realizing that the complexity is not accidental, but necessary.
I'm not trying to say that Java has no accidental complexity of course, I don't want to open that can of worms :)
Which complexity is actually necessary? Does it change when you have 400gbit, SSDs, and watchOSes? How about 1TB of core memory? If we aren't wrangling 75 spinning disks connected to a 10mbit network with 13" blurry CRT monitors, perhaps we don't need to discuss the finer points of engineering efficient client/server LOB applications? Perhaps we ought to discuss MDM, RF, ML, and lifestyle impacts.
This is all just the process of evolution at play. What seems obvious today wasn't yesterday - applies to biology, material science, medicine, engineering, art, music, architecture, design, taxi services, marketing, government, politics, and so why shouldn't it be so in computing?
Sidenote: I love the humility of this video. I remember the days when node was first unleashed. I could not have imagined how it has changed the way we all work. It all seemed so obvious from day one, and here we are today. What a brilliant contribution.
Well there is also a problem with people who don't want to say no, and don't want to stop working on their project when it is finished. Adding all the features that appease a different 1% of your user base is what leads to the bloat - and it still is bloat when 90% of your users will never use the functionality that externalizes all kinds of leaky abstractions and other costs onto them. Just because that bloat may happen to your successor as well doesn't make it right in either case.
Nevertheless, people who need a lightweight language are always able to find one because there is always a new language at that point in its lifecycle. Further, there are languages like Go which seem to be determined to remain easy to get up and running, and don't seem likely to change anytime soon (for better or worse).
This is pretty much a perfect example. That and dynamic languages, although dynamic languages happened because the happy medium of types is type inference, and previous popular typed languages were too verbose/inflexible.
We're definitely reinventing the wheel a lot, though.
I think the truth is even sadder. The new generations hear the complaints of the old. They hear: I hate Spring, Java is so bloated, XML hell, etc. So they think, damn, I don't want to touch Java with a ten-foot pole.
That's when they go instead with the newer system, that didn't exist long enough to have accumulated criticism. Which is backed by enthusiasts still in the honeymoon period.
When an authority figure says something, listeners are more likely to accept it, even if it is wrong. That's just human nature. So authority figures have an extra duty to think about the effects of what they're saying.
They owe their success to these people and so the way that they can pay it back is by using their voice as a tool for improving things.
I think the problem with "bloat-free" is it's a fine ideal until you try to solve any kind of reasonably complex problem, and honestly it starts to creep in even when you're solving something that isn't particularly complex.
Here's a concrete example. Your classic node.js or express.js sample app is something fairly simple like a hello world, or an IM server. A more complex sample probably looks something like that venerable nodecellar app from a few years back. In all cases the spiel is, "Hey, look how easy it is to create a web server with node."
Except that I'm looking at my node server source right now - for an honestly fairly simple app containing a handful of pages and a blog - and here's what I have:
- Routing (obviously)
- Cookie and body parsing
- Session management
- MongoDB integration
- Passport.js for authentication with a couple of providers (FB and Twitter)
- File system access
- HTTPS and SPDY/HTTP2 support
- Compression support
- Logging with winston and morgan, including loggly integration
- Referer spam filtering
- Pug templates
- Hexo blog integration
- Path resolution support
- Request validation and redirects
- Static content support
- Stripping headers such as X-Powered-By, and adding other headers such as the all-important X-Clacks-Overhead
- Error handling
There's probably a couple of other items I missed, but you get the idea - a rough sketch of the wiring is below. It seems like a lot but, as far as I'm concerned, this is express.js app MVP for anything you might want to put into production.
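Heavily trimmed, it looks something like this (the middleware here are the usual suspects rather than my exact stack):

    const express = require('express');
    const session = require('express-session');
    const compression = require('compression');
    const morgan = require('morgan');

    const app = express();
    app.disable('x-powered-by');        // strip the advertising header
    app.use(compression());             // gzip responses
    app.use(morgan('combined'));        // request logging
    app.use(express.static('public'));  // static content
    app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));
    app.set('view engine', 'pug');      // pug templates

    app.get('/', (req, res) => res.render('index'));
    app.listen(3000);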
I haven't even mentioned the gulpfile I use to build all this, which targets desktop, mobile, along with embedded versions for a particular mobile app due to launch in the near future, and has become something of a behemoth[1]. Nor have I mentioned that I have Cloudflare to sit in front of this, primarily to deal with the heavy lifting of some of the media files I serve up.
On the face of it, this might feel like "bloat" but it's all necessary to run the application and, like I say, a lot of it is the bare minimum for an MVP web app in node.
[1] Yes, I know I could/should switch to webpack, but gulp works, and switching to webpack "just because" doesn't justify itself with the value it might add.
IMHO, the point of using a no-frills library/framework is that you want to write the rest of it yourself. The advantage of this is that it meets your requirements exactly and is therefore smaller/less complex.
When doing a project that takes only a few weeks, I would probably choose a framework that has everything in the box. But if you are building something that is going to be developed over a period of years, the reduction in complexity achieved by building your own can be life saving.
When I look at your list, most of the things fall into the categories of "Pretty easy to implement" or "Don't want at all". However, there is an advantage for not reinventing the wheel if there is no reason to do so. If there is a nice library that gives me what I want and doesn't impose itself too much on the design, I will use it. But the main advantage for not baking it into a big framework is that I can pick and choose what I want.
As an older programmer, I come from an era where libraries and frameworks cost a lot of money. We built stuff by hand because there were not a lot of other choices. These days, though, virtually every library and framework is free software (not only free of charge, but you get source code too!) It's like living in Candy Land, and I'm not about to complain about it :-) However, I think that programmers today reach too quickly for the pre-built and do not understand the long term advantages of bespoke development. Like most things, there is a balance to be maintained.
Unless you're using something that has competent dead-code elimination, like the Google Closure Compiler. Then you can go ahead and include 30MB of libs, knowing that only a fraction will actually be shipped in the production version.
I don't come from that era, but I couldn't agree more. Every time one of my coworkers suggests using a library, I tell them that's OK as long as they maintain it. I prefer to spend my time coding, and I like to understand as much of the codebase as I can, instead of having to learn and maintain tens of external libraries. Especially if we only need a couple of functions from that library.
I normally lose those debates though, and the thing reaches a point where the complexity of the code makes it impenetrable.
This is why I love Java so much. Take a look at what Spring Boot does for me:
- Routing is done in two lines of code:
@Controller
@RequestMapping("/myroute")
- Cookie and body parsing - no need to write any code for that; I just declare method parameters and all of the data flies in. Want validation? Just one annotation on a method parameter - @Valid. Custom validators are supported as well.
- Session management. It's just there for me and does the right thing by default. I can replace the storage with custom implementations, but by default no code is required from me.
- MongoDB integration - Spring Data MongoDB: you only need to define interfaces using the naming convention, and the code to access the actual database is generated for you.
- Spring Security supports multiple authentication mechanisms and gives you neat DSL to configure it.
- File system access is a kind of obvious thing.
- HTTPS and HTTP2 support provided by Spring MVC as well.
- Compression support - it's just "server.compression.enabled=true" in your config.
- Logging - slf4j + logback come with Spring Boot, and there are plenty of custom appenders available to put your logs into logstash/splunk/whatever.
- Referrer spam filter - not sure about that one but CSRF protection comes OOB and enabled by default.
- Multiple tightly integrated template engines to choose from. Zero configuration code as well.
- Static content comes OOB and enabled by default, just put your stuff into resources/static.
I mean, yeah, a modern webapp is a complicated thing! So whenever I see somebody trying to do anything "not bloated", it means that I end up writing low-level code that has been written multiple times, again and again.
The other day I was trying to code a simple thing in Clojure, because I love Lisp. Well, it's just embarrassing. I got to a simple page showing stuff from Postgres, and the boilerplate/business code ratio was at about 70%. Manually configure the connection pool, manually start it, manually prepare your component system, manually mention all of the dependencies for components, manually configure the template engine, manually enable static resource support in ring, manually configure and enable session support in ring. Then we come to authentication, and don't even try to sell me Friend. EVERYTHING is manual. The only good thing was "environ", which did the right thing - but again, with "bloated" Spring Boot that comes OOB and I don't need to configure it!
If you don't use something "bloated" it only means that you're writing code yourself, again and again.
No, Spring Boot is one of the best examples of the worst kind of terrible patterns in the land of Java development. The bloat in that framework is awful, and the gods help you if something goes wrong in the annotations-everywhere code for anything but the most trivial of applications.
If you limit the annotations to only the basics (controller, config, bean, requestmapping, etc.), not much can go wrong. It sounds to me like you haven't worked with any large Spring Boot apps and experienced the stability annotations can provide.
Except circular dependencies. I work on an app where a circular dependency failure happens depending on what order Spring finds our annotated classes in. It made writing a faster bean scanner a little tricky, because I had to replicate Spring's ordering method.
To clarify what you said: you wrote a custom "fast bean scanner" and it's not working properly? Or you had to rewrite it because Spring's bean scanner wasn't working? What version of Spring is this?
Ah, the Spring bean scanning was working, but startup wasn't. The reason? Our app was apparently very brittle, and the mere act of registering beans in the wrong order would cause a circular dependency error during startup.
To be more concrete: we were passing the original Spring bean scanner a set of package names, which it would scan. Spring registers those bean definitions in the order that it scans the beans. My custom scanner (which found and registered all the same beans) broke our app because it wouldn't start up anymore, due to a circular dependency error. Once I sorted the bean definitions by the original package path inputs, that startup error went away.
I think we are on 4.x.
Extra details: I used the fast-classpath-scanner library. I subclassed the annotation candidate component scanner class (well, something like that), and rewrote a method to load the resources for the string specified, treating the path as a FQCN, not a package path. Then I could feed that class the output from the fast-classpath-scanner (which was the list of classes with the annotations). Until I sorted the input by the original package paths, my app wouldn't start. Mind you, the method I overwrote simply created bean definitions. But that ordering difference made all the difference.
I can dig up exact class names if you are curious. The scanner of course didn't replicate all of Spring's bean scanning capabilities - just the ones we were using. But it cut the scan time by 60% (several seconds).
I haven't. But I will definitely try to use it; I'm really eager to get to the same level of productivity in Clojure that I have in Java with Spring. I have a huge hope that Clojure + something like Spring Boot for it could make me even more productive. Some of the stuff that we have in Clojure really is wonderful - hugsql, for example.
You'll never get anything close to Spring Boot for Clojure. Spring grew out of J2EE, and Clojure's culture is diametrically opposite, favouring curated libraries. Clojure's main (only?) web framework - Luminus - has a very small team behind it, though it does a fine job.
Nevermind the additional complexity of integrating with payment gateways and handling complex authorization requirements and team invitations etc.
Even a 'simple' web app is a convoluted mess of shit if you are to run a real world production grade system. I'm so sick of all these 'hello world' toy examples.
It's interesting that you mention gulp and webpack when those tools too are now considered too complex and set to be usurped by something like Brunch.
It's a shame these tools keep being rewritten because there are definitely good ideas in all of them, but for some reason they can't seem to be unified.
All these tools start off as simple alternatives to the existing bloated tools. Then as they gain more and more features to support real-world situations they end up becoming as bloated as the tools they set out to replace.
Webpack and Gulp were never meant to be simpler alternatives. Gulp used streams instead of serially processing files like Grunt, which made it faster, but obviously streams working in parallel are more complex than just processing the files one by one.
Webpack pulls dependencies into one file and deduplicates them. This is obviously even more complex since now you have dependency resolution logic as well as dealing with the various module systems JavaScript has invented.
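For instance, the gulp stream model looks like this (standard plugins, assumed installed):

    const gulp = require('gulp');
    const concat = require('gulp-concat');
    const uglify = require('gulp-uglify');

    // Files flow through the pipeline as a stream; nothing is written
    // to disk between steps, unlike Grunt's task-by-task model.
    gulp.task('scripts', () =>
      gulp.src('src/**/*.js')
        .pipe(concat('bundle.js'))
        .pipe(uglify())
        .pipe(gulp.dest('dist'))
    );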
Right. I'd just hope that at some point that cycle ends and people try to fix existing tools instead of replacing everything wholesale with something new that will eventually fail again.
> It's interesting that you mention gulp and webpack when those tools too are now considered too complex and set to be usurped by something like Brunch.
While Webpack is a little dense, it appears to strike the right balance between complexity and customizability (and probably more important for longevity, library buy-in). It doesn't seem like anything on the horizon is going to unseat it anytime soon... certainly not Brunch.
Brunch is not usurping Webpack. But these are tools built on top of Node.js. They're for front end development, and have nothing to do with running a Node.js server on the backend.
Somewhere along the line, we as developers have abandoned the Unix philosophy, especially the oft-forgotten second part ("Write programs that do one thing, and one thing well; and write programs that work well together").
Without the ability to compose multiple small libraries to form the exact solution that we need, we had little choice but to rely on the One True Framework to solve every problem that we will have.
This means if the One True Framework doesn't serve the exact need you have (and it almost definitely won't, there's a combinatorial number of requirements out there), it's time for a rewrite!
NPM is the closest environment to the Unix philosophy apart from Linux. Lots of small packages instead of a large base class library like all the other language ecosystems.
The thing is, even with Linux distros, most of the stuff you want is built-in by the distro. Once you start to add your own stuff, it can get really ugly and you have to be really pro to get anything done. It seems like every time I'm working on updating a Linux image I have to do some really bizarre thing where the package manager doesn't even work right and the instructions or some forum have me doing some mind-blowing workarounds I don't even understand.
So I think you are combining two different topics. I am all-in for libraries over frameworks. But the larger, more heavily curated libraries where you only need minimal customization are just objectively better. Having a large, curated standard library != a framework.
I'm lucky enough to have been around the industry for a while. I could probably count a dozen or more things that started out "Like X, only without all the BS!" --- only to end up with just as much BS or more than X ever had.
It's an anthropological effect, not a technological one. A new generation of craftsmen faces a choice between submitting to the rules of the old guard and making up their own rules.
Reduced complexity is often a rallying cry, but I think the root of the phenomenon is in trying to find one's own social and professional standing in the situation where all the prominent positions are already taken and what little is left requires years of hard labor (complexity, certification, corporate review system, etc).
If this situation upsets you, consider the alternatives; they might be worse.
This reminds me of when I first read about fractals: a collection of phenomena I've been staring at all my life, but never really saw until someone pointed out how to see them.
Currently popular music mostly sounds like noise to me, but that's not the point. What are the current generation of musicians supposed to do, be silent and spend their lives listening to the great bands of my youth? It's impossible to match Pink Floyd in the style of Pink Floyd. They need a new style.
Facebook is losing traffic to whatever is the latest trend in social media, not really because people are suddenly paranoid about privacy, but because each generation needs a network where the previous generation is not.
And for as long as humans write programs, there will be a need to invent new languages, not because the old languages were technically inadequate, but because each generation of programmers needs a way to escape the shadow of the previous generation, the way acorns need squirrels to carry them away from the shadow of oak trees.
Exactly. It's not just music, consider modern and post-modern art. Even earlier art can be seen as a form of protest against the tyranny of the establishment of yore.
It's interesting to see which parts of our civilization came down on which side of the divide. The market economy, for example, is a great way for a young enterprising person to find their own footing away from the old (hence startups). Academia, OTOH, went totally the other way (hence grad school).
...and this is the problem: the assertions of escape. I don't understand it. I would write prose the same way Jack Vance (did) and Gene Wolfe (does) if I could write. There can be one true way for the expression of logical thought. We can try exceptional dialects, but they fail because they cannot encompass every concept we should expect, and they cater to the ego and domain.
OTOH, the young always have a fresh perspective, and they usually have good ideas based on the times. They should be listened to and mentored. Very few active older (1995+) folks are left in IT to mentor them, after the management methods and purposeful purging of the last 10 years. Most of us weren't great at teaching anyway. It was a paycheck.
Tech is ripe for applied group psychology and anthropology. The social, psychological, and anthropological factors are obvious to casual observers -- but completely invisible to the people they affect the most.
There's a reason for it, and perhaps overall it's a good thing... but that still doesn't mean it can't be accepted and acknowledged as a facet of the community.
Yes, this is not only about the tech sector. You see it everywhere there are humans. There are always new people thinking: why do they do everything so stupidly? I can do it much better if I do it this way. Sometimes they are right and everyone is happy; sometimes they do it exactly the same way it was done ages ago, when it was found to be inefficient or to fail in some way, and they get to hear "There was a reason we did it the other way".
I try to explain why we do things the way we do and if they still want to try to change things, I make sure it's easy to go back again in case it fails the usual way.
I am surprised that you've only mentioned npm. The current complexity of webpack, the number of frameworks, the language going through multiple ES revisions, and the type-safe alternatives to the language make it seem like the eventual complexity is almost unavoidable and simplicity is a marketing fad.
This was one of the more interesting software talks I've listened to recently. I like that it was very real - there are serious, serious problems with Node.js, and the fact that even the creator acknowledges these problems caught my attention.
I'm also a long-time user of Dart, so when he brought that up, and compared TypeScript to its shortcomings, I definitely agreed.
That being said, even with the Deno project, I'm not so sure what can come in terms of performance and security from running JavaScript outside of a browser. The choice of V8 also raises concerns for me about build systems. He mentioned the failing of GYP, but anything using the same build toolchain as Chromium always introduces a wealth of complexity for anyone building said software, as not only do you need Google-specific build tools, but also very specific configuration including Python.
It will be interesting to see what comes in the future.
If it were up to me (which I guess it isn't), I'd probably prioritize portability/less complex builds, built-in FFI, a flat module system, and optimizing calls to C from JS.
> performance and security from running JavaScript outside of a browser.
I'm just now building a node app to filter point clouds, so lots of number crunching. In two days I've got something in javascript that's faster than the C++ solution I've been working on for a week. Mostly because javascript/node makes it trivial to parallelize file IO while doing work in the main thread. This app reads 13 million points from 1400 files (~200mb), filters all points within a clip region, and writes the 12 million resulting points in a 300MB file, all in 1.6 seconds. (File reads were cached by OS and/or disk due to repeated testing, but file writes probably not)
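That overlap is easy to express. A minimal sketch, assuming a Node version with fs.promises; the record layout, clip test, and file names are made up for illustration:

const { promises: fs } = require("fs");

// Hypothetical record layout: x, y, z stored as little-endian uint32.
const STRIDE = 12;

// Made-up clip test.
const insideClip = (x, y, z) => x < 1000 && y < 1000 && z < 1000;

// CPU-bound filtering, done on the main thread.
function filterPoints(buf) {
  const out = Buffer.alloc(buf.length);
  let outLen = 0;
  for (let off = 0; off + STRIDE <= buf.length; off += STRIDE) {
    const x = buf.readUInt32LE(off);
    const y = buf.readUInt32LE(off + 4);
    const z = buf.readUInt32LE(off + 8);
    if (insideClip(x, y, z)) {
      buf.copy(out, outLen, off, off + STRIDE);
      outLen += STRIDE;
    }
  }
  return out.slice(0, outLen);
}

async function run(files) {
  // Start every read up front: libuv performs the IO in the background
  // while the main thread filters whichever buffer is ready.
  const reads = files.map((f) => fs.readFile(f));
  const parts = [];
  for (const read of reads) parts.push(filterPoints(await read));
  await fs.writeFile("out.bin", Buffer.concat(parts));
}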
My personal conclusion is that javascript can rival or even exceed the performance of C++, not because it's inherently faster (it's obviously not), but because it makes it much easier to write fast code. For the highest possible performance you'll definitely want to use C++, but sometimes you'll have to spend multiple times the work to get there.
Right, V8 works great at optimizing JS and it handles streams great. Productivity is one of the most important factors to think about when building software systems since human time is much more expensive than CPU cycles. That's why Node.js works great.
This suffers from the false dichotomy of speed of execution vs speed of development (which includes fixing bugs). Well-designed languages optimize for some weighted preference of the two, and some languages deliver more of _both_ than others.
For example: TypeScript is fast to write, and Golang is reasonably quick to write _and_ execute. Both should have ~15% fewer bugs than javascript, potentially making them faster to develop with (where bugs matter).
Once you've paid the upfront cost of learning Golang, I might agree with you. But then again, in particular when building a full stack app (and not when on a team that has back end and front end specialists), it's helpful to use the same context (JS/NPM) when devving. People are bad at context switching.
I've heard this argument several times now and it finally hit me what I dislike about it.
If you aren't context switching between your backend code and your frontend code (even when both are JS), you're probably incurring technical debt in your architecture to be paid in even greater numbers of dev hours down the road.
When you are writing an all-JS full-stack app, do you really feel like you're only working on a single app, as opposed to two different apps which happen to share the same repo?
It's not even likely they're in the same repo when working with Node (although possible). The context switch between apps is one cost you have to pay (surely your backend will differ from frontend in architecture). However, the costs of switching languages is higher. Golang / JS conventions are very different. It is possible to share some common libs front end / back end (lodash, validation logic, etc) and that helps too.
> The context switch between apps is one cost you have to pay [...]. However, the costs of switching languages is higher.
Is it? I was working with a system where the server is written in Erlang and the client (and another server) is written in Python. No problems with switching back and forth.
Yes, that's a cost your brain is paying. And a higher cost if you bring on new devs to that project, who then need to learn and understand both Python and Elixir. Debugging is different in both languages, libraries are different, standards, conventions, top level APIs, runtimes, capabilities, all that has to be understood to operate at a high level. That's a non-zero cost and it's pretty significant. That you've learned both so well that you can switch between them is great, but that's equivalent to knowing two musical instruments as well as one.
>> I was working with a system where server is written in Erlang and client (and another server) is written in Python. No problems with switching back and forth.
> Yes, that's a cost your brain is paying.
What cost? I said I haven't noticed any.
> new devs [...] then need to learn and understand both Python and Elixir.
No, they don't need to learn even a speck of Elixir.
--
What you described is a truism: one needs to learn two languages to write in two languages. Yes, this is obviously true. What I'd like to hear is an argument that switching between them when they're used in a single system is costly, because I haven't observed that. This is what I'm disputing, not that learning another language has its cost.
There are more things you have to remember. Workflows in both languages. Of course it's more stuff, thus more context. And you have to use both languages constantly to stay fresh in them. The syntax isn't the only problem, just the easiest one.
Is it just as easy to maintain Spanish and English skills than just English?
"I don't notice it" isn't a very strong argument. I bet you don't notice the effects of slight dehydration and your diet and exercise on your output either. But if you were actually experimenting with it, I guarantee you could soon perceive it.
How wouldn't there be a cost of switching between two languages? Normally, you could just try it; then you'd know. Though the prerequisite is a system that was designed, with clearly designated borders between the parts, not a system that has just emerged.
Proving something's non-existence is a little like proving that you're not a weapons smuggler. How would you expect to even start?
> There are more things you have to remember. Workflows in both languages. Of course it's more stuff, thus more context.
But this is irrelevant to switching between the languages. You have just as much to remember if you write unrelated things, each in its own language.
> Is it just as easy to maintain Spanish and English skills than just English?
"Just as easy than"? Really? In a thread about languages?
You picked the wrong analogy. It is just as easy to write prose with every second paragraph alternating between English and Spanish as it would be with just English. The prerequisite, obviously, is that you know both languages.
> And you have to use both languages constantly to stay fresh in them.
For some value of "constantly". It's not like people forget everything about a language when they don't use it for a week or a month.
> "I don't notice it" isn't a very strong argument.
Well, at least it's some argument. On your side is only "how wouldn't there be a cost?", clearly from a position of somebody who doesn't use many languages.
Right, but the cost of learning Go is ~1 afternoon :p Anyway, I don't really buy the argument that there are efficiencies from using the same language on the backend and frontend. I think there are efficiencies from using a language you're more familiar with or languages that are less error prone or more ergonomic or which have better tooling/ecosystem/etc, but I've never had a problem I would chalk up to context switching.
what is the 15% figure from?
I'm suddenly working with a dynamically typed language (Elixir) coming from Scala, and I do find more bugs. I'd like some hard evidence to support that figure, so just curious.
> (File reads were cached by OS and/or disk due to repeated testing, but file writes probably not)
Unless you flush the pages manually, your dirty pages (written files) live on long after your process died. Depending on system and configuration, minutes or even hours can pass before they are flushed to disk.
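If you want the write cost inside the timing window, you can force the flush explicitly before stopping the clock. A minimal sketch (the file name and buffer are stand-ins):

const fs = require("fs");

const outBuffer = Buffer.alloc(1024); // stand-in for the real output
const fd = fs.openSync("out.bin", "w");
fs.writeSync(fd, outBuffer); // lands in the page cache, not on disk
fs.fsyncSync(fd);            // blocks until the kernel flushes to disk
fs.closeSync(fd);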
Do you know of best practices for benchmarking with uncached data? It's something I've been wondering about for a long time; I've seen many attempts at benchmarking things without regard for disk caching, e.g. benchmarking in-memory databases against out-of-core databases where, because of caching from repeated runs, the results were meritless since the out-of-core databases had their stuff in memory as well.
Can you share some sources? I tried to process simple CSV files in a very straightforward (but async) way, and reading a 200MB CSV and just splitting it into columns (with a simple split by comma) took ~10 seconds.
Also, the simplest HTTP request in Express is handled in several ms, and that's A LOT in my opinion.
What happens is that ~1000 files are loaded in the background, points are filtered in the main thread, and even while some files are still being loaded, we already start writing the results to the output file.
> I tried to process simple CSV files in a very straight forward (but async) way and got reading 200mb CSV and just splitting it to columns (with simple split by comma) takes ~10 seconds.
CSV may be a much bigger challenge since it's ASCII data. Parsing text always tends to take multiple times longer than reading binary.
If file buffering is the bottleneck have you tried using tools like cat to buffer the input? Buffering input and output is the sort of thing you get for free in unix.
I'm reading the values directly from the Buffer object that is returned from fs.readFile using buffer.readUInt32LE and buffer[index], then convert the values to doubles to do the hit-test and if it succeeds, it's immediately written to the output buffer.
The nodejs Buffer object is a subclass of Uint8Array as far as I know.
On writing, the double coordinate values are transformed back into a fixed-precision integer format and stored in the output Buffer object.
I'm not generating an intermediate buffer since that does decrease performance a bit. It's directly from input buffer to output buffer. Output buffer is initially allocated with the same size as the input, and before sending it to the output stream it's cut to the actual size.
One thing I've previously learned and which has shown to be still true is that writing individual bytes to a buffer is faster than writing integers.
So originally I did this:
outBuffer.writeInt32LE(ux, outOffset + 0);
But this turned out to be significantly faster. (~20%)
// do once
let tmpBuffer = new ArrayBuffer(4);
let tmpUint32 = new Uint32Array(tmpBuffer);
let tmpUint8 = new Uint8Array(tmpBuffer);
// do many times
tmpUint32[0] = ux;
outBuffer[outOffset + 0] = tmpUint8[0];
outBuffer[outOffset + 1] = tmpUint8[1];
outBuffer[outOffset + 2] = tmpUint8[2];
outBuffer[outOffset + 3] = tmpUint8[3];
The buffer contains not just integers but also bytes, unsigned bytes, shorts, etc. Because of this, the buffer length may not be a multiple of 4, and a Uint32Array view can only be constructed on a buffer whose length is a multiple of 4.
Edit:
Also, the stride from one record to the next can be anything, e.g. 15. A Uint32Array would need a stride of 4, or a multiple of 4, to be useful.
Edit2:
I could try to create 4 Uint32Arrays with byteOffsets 0 to 3 and a view length that's a multiple of 4, then use the one that works with the attribute I'm currently processing. Though typed-array views require the byteOffset to be a multiple of the element size, so the unaligned ones would throw a RangeError; a DataView (whose getUint32/setUint32 allow unaligned offsets) would be needed instead. Not sure if that's really going to be faster, but who knows.
> the fact that even the creator acknowledges these problems caught my attention.
Not to take anything away from Ryan Dahl's ability to reflect and be open about what he considers his design mistakes, but it probably also helps a bit that he walked away for quite a while before coming back. A bit of distance helps in these matters.
"I'm not sure what can come of performance and security running JavaScript outside of a browser" well we already know about performance. Ryan was mentioning and demonstrating that JS is in a sandbox already and if modifying node to have message passing with dispatchers mirroring on native and js sides, you in effect have very tight security as you are strictly allowing only sys calls that are going through a single point and can be managed through a single point. Seems like that one is figured out with Deno.
Having worked with Maven, Gradle, Ruby Gems, Pip and the non-existing Go package management I must say I actually really like the Node / NPM combo. I guess artists are their own worst critics.
edit: forgot Scala's SBT; admittedly a builder using Maven repos, but still an excellent example of how bad UX in this area can get.
You'll find much harsher critics of Node/NPM in these parts!
I thought Ryan did a great job of explaining his regrets without giving the impression that Node was a "mistake", is "inferior", or anything so drastic.
> You'll find much harsher critics of Node/NPM in these parts!
They're ill-informed. GPP is correct that, for example, pip is fundamentally inferior to npm [1], and those that insist on throwing shade at npm on HN should be corrected. They're wrong, and insulting a sound, well-maintained project, without basis.
> those that insist on throwing shade at npm on HN should be corrected.
Preferably by giving them better ammunition, since I do see NPM as substandard in quite a few ways, which is inexcusable when there do exist examples to learn from (whether as a positive or negative influence).
First, it helps to clarify whether we are talking about npm the client or NPM the repository and ecosystem. Client issues are generally easily resolved, just use a different client. For npm, this could be yarn. For cpan, this could be cpanm, or cpanplus, etc.
If it's indeed the repository we are talking about, there are some obvious things that could be done to greatly improve the NPM module ecosystem. For example, how about automating module tests against different versions of Node to determine whether a module runs cleanly on the current and prior interpreter versions, on the platforms it supports? [1] How about for a prior version, in case you're trying to figure out whether the version you're on has a known problem on the platform combo you're running on? [2] Or perhaps you want to know what the documentation and module structure looked like for a module a long time ago, like 20 published versions and over a decade ago, because sometimes you run across old code? [3] Or, as an author, the ability to upload a version, even for testing, and get an automated report a couple days later about how well it runs on that entire version/architecture matrix, with any problems you might want to look into?
In case you didn't notice the trend, I'm talking about CPAN here, which has been in existence for over two decades, and many of the features I've noted have been around for at least half that time. All in and for a language that most JS devs probably think isn't in use anymore, and whose professional developers they would probably take for unicorns or dinosaurs on encountering one.
Sure, NPM isn't all that bad compared to some of the examples that were put forth, but the problem is that those examples are a limited subset of what exists. Given the current popularity of JS and the massive corporate interest and sponsorship, I frankly find the current situation somewhat disgusting. The only thing keeping JS from having an amazing module ecosystem is ambition. Sure, NPM might be a sound, well-maintained project (points I think are debatable), but it could be so much more, and that's what we should be talking about, not the almost annual fuckups[4] they seem content with dealing with.
> there are some obvious things that could be done to greatly improve it [...] it could be so much more
As with much of programming language design and implementation over the last 3+ decades?
below> that could have been mitigated or entirely avoided by surveying best practices
Yes, people who would spend their time working on language designs and implementations, should at least be familiar with the many surveys of best practices. Surveys of repos, type systems, memory layout, parallelism, and so much else. Language choices are intertwined and subtle, and adhocery has enormous downstream costs to the field and to society. The programming language design and implementation wiki exists for that reason. To accessibly distill our collective experience. Not using these resources is negligent - a disregard of our profession's responsibilities to society.
Oh, wait. Our field can't be bothered to create surveys of best practices. Or a wiki. Knowledge is inaccessibly dispersed among balkanized and siloed human communities, assorted academic papers, and scattered code.
Shall we continue to blame pervasive failure on individual language developers and tools? For how many more decades? At what point do we start addressing it as a systemic problem?
:) So I agree with your observation, but suggest the problem extends far beyond package management systems.
> Sure, NPM isn't all that bad compared to some of the examples that were put forth, but the problem is that those examples are a limited subset of what exists.
That was all I was responding to.
I definitely learned some cool stuff from your comment, and appreciate that, but my point was simply that all the drive-by FUD that npm gets on HN is unwarranted.
> I frankly find the current situation somewhat disgusting.
This feels so hyperbolic though. The things you mention are cool 'nice-to-haves', to say not having them is 'disgusting' is a huge stretch in my opinion.
> This feels so hyperbolic though. The things you mention are cool 'nice-to-haves'
What I find somewhat disgusting is the massive amount of mistakes they've made over the years, and the time they've had to take to fix them, that could have been mitigated or entirely avoided by surveying best practices from other package management systems that have gone through the same pains.
2018-05-28 - ERR! 418 I'm a teapot (this is not a joke)
https://github.com/npm/npm/issues/20791
https://news.ycombinator.com/item?id=17175960
2018-02-21 - Critical Linux filesystem permissions are being changed by latest version
https://github.com/npm/npm/issues/19883
https://news.ycombinator.com/item?id=16435305
2017-08-01 - Typosquatting package names
https://twitter.com/o_cee/status/892306836199800836
https://news.ycombinator.com/item?id=14905675
(a little obtuse, but moderated package namespaces with trusted maintainers can mitigate this, and spread load from levenshtein distance checks.)
2016-11-03 - Visual Studio Code 1.7 overloaded npmjs.org, release reverted
https://news.ycombinator.com/item?id=12860806
(10% increase in NPM load, specifically to 404 pages, causes NPM to fall over due to naive 404 handling and, apparently, poor ability to scale. Good thing they caught it at 10% instead of the 200% it would have reached...)
2016-03-29 - changes to npm’s unpublish policy
https://blog.npmjs.org/post/141905368000/changes-to-npms-unpublish-policy
https://news.ycombinator.com/item?id=11382885
2014-02-28 - npm’s Self-Signed Certificate is No More
https://blog.npmjs.org/post/78085451721/npms-self-signed-certificate-is-no-more
https://news.ycombinator.com/item?id=7320833
2012-03-08 - npm (Node's package manager) leaks all user password hashes and salts
https://gist.github.com/jashkenas/2001456
https://news.ycombinator.com/item?id=3679996
That's just from the first page of the HN search I included previously (link 4); I doubt it's really exhaustive. Now, to me, that list of problems would be bad enough, but NPM is actually run by a for-profit company, and gates certain features behind paid accounts. So what we have is a business, catering to what is likely the largest group of developers in existence, for a language with corporate backing from multiple very large companies, providing vital infrastructure support for that language and those users, and getting its ass handed to it in comparison to some alternatives manned by people volunteering spare time, skill and equipment.
I mean, I would cut them a little slack if they seemed to have plans for making stuff better and a roadmap, and it were just a matter of time, effort and resources; but it seems to continuously be a case of them waiting until the shit hits the fan, being forced to take a look at this new problem they never envisioned, and only then figuring out a solution. Sure, it can sound hyperbolic initially, but I think that's just because people haven't really stopped to take stock of what's going on here, and how it's not really getting better in any useful way. The midst of an emergency fix is not when you should be planning new features. :/
Recently, when I use npm, it mostly just works. There's still the occasional node/npm version mix and match to get certain libraries to work and accidental sudo; the former might just be the poor quality of the ecosystem, and the latter is almost just user error.
I'd put it on par with rubygems; ahead of pip, gradle, and maven; a little behind mix; and far behind cargo. Not a bad spot to be in by any means.
For the purposes of this discussion, it is useful to note that cargo was written by Yehuda Katz (wycats), who had previously written Bundler, and so had some concept of the mistakes he had made before, plus experience specifically in this area; apparently (I haven't used it yet, but I have heard lots of good things) enough to finally build something truly great.
I've worked with Maven, Ruby's gems, Python's pip, whatever Go's non-existent package management is called, and Node, via npm and yarn. I'd have to say my favorite tooling and package management is found in Elixir's mix utility though. I don't mind the others. They are all decent enough, but I think the Elixir team really nailed it with mix.
* The maintainers have pushed several breaking updates by mistake (I'm a teapot recently).
* There have been a few cascading failures due to the ecosystem (leftpad).
* node-gyp (alluded to in the talk) breaks cryptically on install under various operating system/package combinations. It also obscures the actual package contents.
* The lack of package signatures and things like namespace squatting significantly hurt the overall security of npm.
And let's not forget how terrible things were pre-yarn with the nested folder structure of node_modules and no lock file.
Compare that to NuGet where I've literally never had any of these problems.
Here's one thing Node and npm are great at and NuGet fails at completely: local development of two packages. With npm, I can use "npm link" to redirect a package reference to a local folder. With NuGet, the best you can do is edit the .csproj and change the NuGet reference to a project reference (if you can find the original source code). This makes simple step-through debugging across package boundaries a chore every time, whereas a source-based package system doesn't have this issue.
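For reference, the two-step dance looks like this (the folder and package names are made up):

# in the package you're hacking on
cd ~/src/my-lib
npm link            # registers a global symlink to this folder

# in the app that consumes it
cd ~/src/my-app
npm link my-lib     # points node_modules/my-lib at the folder above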
"npm link" only establishes a symlink between the two directories and doesn't respect .npmignore or behave in any comparable way to publishing a package and installing it. Sometimes the only way to debug is to repeatedly publish and re-install the package you are developing.
I can't argue with you, but "most of the time" it works. Not that I have a choice (unless I stop being a frontend developer), but the time we spend with node/npm debugging is not critical in the timesheet/log.
I've worked with all of these as well, and npm is probably my least favorite. Above all else, I expect my build system to do ONE THING:
Exactly reproduce a build at a later date
Part of it is technological (npm didn't have package-lock.json until very recently), part of it is organizational (the npm repository is surprisingly fluid), and part of it is cultural (the JS community likes zillions of tiny constantly-changing libraries). The net result is that I cannot walk away from a JS build for three weeks without something breaking. It breaks all the time. UGH.
What I admire about Ryan is the attitude of "I shouldn't just complain without giving a solution...", and he gave a solution, more than once. I have done this on a no-one-cares scale, but it really is better to do it yourself when you can. Also, Ryan has some sharp sarcastic wit, which is pretty fun to watch in this talk.
I thought that was nice too, but I wish we as an industry placed a little more value on simply admitting that something is bad or suboptimal. You're not supposed to "complain" or "be negative", which I think is unfortunate in lots of ways.
Yah, I get that. I remember at my work, I said something negative about how the emails we were sending out looked like they were from 1990. I got a lot of backlash for complaining, but the next day I gave them a modern solution and bam, they were super happy. Maybe it's just human nature, but providing something to fix the problem rather than just being unhappy alone is more powerful.
Haha! Ryan may not have been the best person to design a novel package manager, and while he has some insight into his mistakes the first time around, he still may not be. Neither am I; call me if you need a compiler or a GUI. Everybody has a different set of areas where they have gone deep and developed good instincts.
I dunno, importing from a url seems really smart and practical to me. It divorces the run-time (Deno) from a package manager (like npm). I wonder how it handles dependencies though.
Is it synchronous? Does it follow redirects? What happens in a 404 situation? Does it obey cache headers? What happens when a timeout occurs or the resource isn’t code? What happens with recursive dependencies and other edge cases as a result of not knowing the dependency tree until runtime? What about error handling and recovering from these failures at runtime or compile time? Should all resources be secure? How does it work when you are developing on one of those dependencies? Do you have to run a local web server? How does versioning work? Where is the cache stored and is it content addressable? How to clear it? Is it global or per user?
I could go on but you get the idea. A package manager is not simple and requires A LOT of choices. The best one I’ve seen is old CPAN.
The Web has all of these issues and yet importing JS libraries this way has worked just fine.
Obviously it's not what you should do if you are publishing a library for others to use, but for local use, and the kinds of exploratory scientific computing he was talking about, it sounds perfect.
We have those things as a result of having too much code pulled in from too many sources. This has approximately nothing to do with URLs as identifiers, which is one of the foundations of the Web.
It doesn't, it uses $GOPATH. The packages are downloaded using `go get` in a directory structure that mimics the full url: ./github.com/user/repository/...
The main problem is that URLs offer even fewer immutability guarantees than package management servers do, plus they're even easier to 1) break and 2) attack.
One takeaway from this is how Microsoft is killing it currently:
* Much-beloved TypeScript
* Much-beloved VSCode
* Much-beloved GitHub
If they hire Ryan to flesh out his vision for deno we'd probably need a new acronym, MAFANG
On a more serious note, I wonder if deno could support a lower-level construct like Observables. As much as Promises are perceived to be an improvement over callbacks, they still have major flaws (only one of which is mentioned in this talk); this is something that Observables can address.
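For instance, a Promise runs eagerly and can't be cancelled once created; a bare-bones hand-rolled Observable (just a sketch of the shape, not RxJS) is lazy and cancellable:

// Nothing runs until subscribe(), and the returned function tears
// the work down; Promises offer neither of these.
function interval(ms) {
  return {
    subscribe(observer) {
      const id = setInterval(() => observer.next(Date.now()), ms);
      return () => clearInterval(id); // unsubscribe
    },
  };
}

const unsubscribe = interval(1000).subscribe({ next: (t) => console.log(t) });
setTimeout(unsubscribe, 5500); // cancel after ~5 ticks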
I've already seen the acronym expressed as FAAMG, which is probably more accurate to start with; Netflix has great performance but is nowhere near the market cap of the other companies.
Does anyone remember the joke/hoax, back in the old days, of the alleged Microsoft Linux distro? :D (Of course, this was in the age of the Halloween Documents and the "Linux is cancer" mindset [0], back when Microsoft was a very different business).
I've said it before and I say it again; NodeJS is an infrastructure component, not a general purpose application runtime environment.
I totally recognise the IO problem in our connected world and NodeJS really does solve the problem around the "many simultaneously persistent connections", something that would be really hard to do without something like NodeJS. In essence (and in my humble opinion) NodeJS is basically a programmable socket server and its main feature is "websockets".
Writing software applications in NodeJS is the most awkward experience due to its async nature. Business logic is inherently sync, not async. Much of NodeJS's existence has been spent trying to find an elegant way to make this async behaviour look like it's sync: from callbacks, to promises, and now actual language features added to JavaScript itself (async/await).
The problem with Javascript on the server is not Javascript, but the runtime in which it's executed.
I think it would be interesting to have a sync version of NodeJS that acts more like traditional Ruby and Python, next to the async variant that we currently have. Both types could then be used alongside each other, each solving the problem for which it is best suited.
I'd say your intel is far out of date. Async-everything + async/await makes Node more elegant than the same programs in Ruby/Python.
Even little things like "make these two database requests in parallel and wait on them both" or "process these urls but only have 8 requests in flight at a time."
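The first of those is a one-liner with Promise.all; a sketch, with made-up helpers standing in for real database calls:

// Hypothetical async helpers standing in for real queries.
const getUser = async (id) => ({ id, name: "ada" });
const getOrders = async (id) => [];

async function main(id) {
  // Fire both requests, then wait for both to settle.
  const [user, orders] = await Promise.all([getUser(id), getOrders(id)]);
  console.log(user.name, orders.length);
}

main(1);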
I'm always impressed at what one thread can do with non-blocking IO. I think node strikes a decent balance on parallel programming without the pitfalls of threading. Go is undoubtedly better at lightweight threading, but it's not as simple to grok (or debug).
There are many ways to do this, but at its simplest, because Node is single-threaded, you can have a shared array of URLs and a pool of workers that .pop() a URL off and stop when there are none left. Each worker's http request runs async, so with the `async` keyword at the start of its signature, and `await fetch(url)` inside, the code reads (and in some ways behaves) like regular synchronous iteration but runs (mostly) concurrently.
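A sketch of that pattern (assuming a fetch implementation such as node-fetch; the URLs are placeholders):

const urls = ["https://example.com/a", "https://example.com/b"]; // etc.
const results = [];

async function worker() {
  let url;
  // pop() is synchronous, so two workers can never grab the same URL.
  while ((url = urls.pop()) !== undefined) {
    const res = await fetch(url);
    results.push({ url, status: res.status });
  }
}

async function crawl() {
  // 8 workers -> at most 8 requests in flight at any moment.
  await Promise.all(Array.from({ length: 8 }, worker));
  console.log(results);
}

crawl();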
> NodeJS really does solve the problem around the "many simultaneously persistent connections", something that would be really hard to do without something like NodeJS
What part of it is really hard to do without NodeJS? You can do this easily in Go, Elixir, C++, Rust, etc.
The nice thing about Node.js's design is that (almost) all IO is non-blocking by design, so developers can't block their programs on IO by accident.
In other platforms developers need discipline to choose non-blocking APIs over blocking ones, or to wrap their blocking calls in async execution contexts like threads or coroutines.
All the "sync" commands are blocking. In my experience developers need discipline to avoid using those and blocking the whole app.
Of the languages mentioned, I believe Go is actually closest to all IO being non-blocking by default, with the caveat that you are using multiple goroutines.
I'm just disagreeing that it's "really hard" in languages besides JavaScript. Concurrency in Go is stupid easy; it's built into the language. Pretty sure Elixir is the same story. C++ and Rust are slightly more difficult, but still nowhere near "really hard" when it comes to making a socket server that can serve hundreds of thousands of connections.
>Writing software applications in NodeJS is the most awkward experience due to its async nature. Business logic is inherently sync, not async.
The first thing I did for a recent project in node was a sync =>{done()} queue for ws requests. There is room for fine-graining, but it's the routine job of determining which parallel cases don't strain the "C" and "I" of ACID much. Not a big deal. The choice of node was obvious in that it has the most straightforward installation and maintenance on windows boxes (I'm language- and OS-agnostic [except a js allergy], but peers aren't). I think that async is yet another hipster bullet, since our for-profit businesses rarely need much traffic, and those that do barely fit in 10kloc of js.
Personally, it's awkward not so much because it's async, but because it's too low-level, and npm doesn't feel like a well-documented and thorough module set. It lacks unixlike formality and strictness in phrasing; it's oriented toward beginners, not toward -pedantic needs. It's hackerish and cool until you hit the borders required beyond the "move fast" thing; then it slows you down. And I haven't found a good abstraction over the ws and db layers yet. It feels like a bicycle and a shovel when you have to dig a career (pun intended).
https://github.com/laverdet/node-fibers is a project that enables writing code this way. It's used, for example, in webdriver.io so that you can write synchronous test code that transparently (i.e., without `await`) calls out to async helpers.
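The canonical example from that project is roughly this: a sleep() that suspends the current fiber rather than the event loop:

const Fiber = require("fibers");

// Suspend the calling fiber for ms milliseconds; the event loop keeps
// running, and the timer resumes the fiber via fiber.run().
function sleep(ms) {
  const fiber = Fiber.current;
  setTimeout(() => fiber.run(), ms);
  Fiber.yield();
}

Fiber(() => {
  console.log("wait...");
  sleep(1000); // reads like sync code; no await anywhere
  console.log("ok, 1000ms later");
}).run();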
Except javascript's design is completely antithetical to those use cases. It has no type system, its performance is hard to profile and reason about, it's single-threaded (unless you want to start forking and managing threads in javascript...), it has weird edge cases, piss-poor numeric computation, etc. etc.
Actually, I can’t think of a single issue that npm doesn’t also have.
Possible issues:
- trust
Not an issue since if you trust an author’s packages on npmjs you should also trust them from unpkg, github, etc... (Aka there is no trust in npm land except trust in the author)
- Versioning
Put the version in the url instead of some json, same difference.
- Lack of ^ for automatic upgrades
That’s a feature. Avoids the need for lock files.
- Repetitive
Have a dependencies.ts to group external imports, same thing as package.json
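A sketch of how that could look (the URL here is illustrative, not a real registry):

// dependencies.ts: the single place that names external code.
// The version lives in the URL, so upgrading is a one-line diff.
export { parse } from "https://example.com/some-lib@1.2.3/mod.ts";

// elsewhere in the project:
import { parse } from "./dependencies.ts";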
I'm not a huge NodeJS developer, but have done enough (and other development) to think this is not a good solution to the packaging/dependency problem.
My experience with npm issues were usually that some dependency had its own build process.
There are so many inconsistencies in how JS libraries are packaged and downloaded. I think scripting in general leaves the whole dependency thing out in the wind.
Now. If we could turn packages into testable objects, that would be cool.
And also, golang tried having no centralized package management by using git repos. It ended with people go-getting from moving masters. Of course they did.
And in an era of finally accepting lock files, do you really want to go back to the middle ages? Lock files are not a constraint, they are a godsend. You want lock files. You don't want to choose between vague dependencies and pinpointed ones; you need both to stay sane.
And of course somebody will do something dynamic that will open a security issue.
And of course typo squatting is going to be so much easier.
And removing a bad lib? On npm you signal the admins. When it's on its own domain? Good luck.
And then searching for libs is going to be fun. And naming, naming will be amazing.
And having a quick glance at the dependency of a lib ? So much fun.
And wait for the search/replace in the code that changes your entire runtime by mistake.
And no possible alternative package manager. Hope their dependency resolver never sucks, cause you'll be stuck till you can install the next node... if they ever fix it.
Ah, and the git blames to see what dependencies have changed are going to be just peachy.
Oh, and your juniors copy/pasting code from the net is going to get extra crispy.
But wait, running the code with conditional imports means your project may install something at ANY moment in its life cycle. And it could be changed by somebody editing the code by mistake, without really meaning to change a dependency. You know, like a ctrl + D on "0.1" to replace all those floats quickly.
Also, cool URLs don't change. Until they do. TinyURL dependencies are going to be hilarious.
I could go on and on and on and on...
I can't understand how you can be intelligent enough to code freaking node JS and not see THAT elephant in the doll room.
You are imagining the worst possible execution of these ideas. Nobody is proposing that you should start using libraries that pull in code from random domains, unless you have some specific need to. Whitelisting sources is such an obvious step, given the security focus, that you should really have applied the https://en.wikipedia.org/wiki/Principle_of_charity in your speculation.
Not OP, but a couple guesses off the top of my head:
* It becomes a more painful process to upgrade dependencies (have to find/replace across your codebase).
* Many versions of the same library get pulled in. If you depend on package@1.5.3 and a dependency of yours depends on package@1.5.4, that's twice the dependency size as compared to both just using 1.5.4. This matters more in the case of web app bundle size though than running local programs.
You could have an entry file for your dependencies (let's say dependencies.ts) and reexport everything from there.
When you have to update, you only have to upgrade it in the entry file and you can avoid multiple versions.
But this is exactly the way the Web works. Right? And that platform seems to have done alright for itself.
For libraries or professional projects, ship your dependencies or use a build system. But for one-off projects, being able to pull in a utility library without needing any tool but the interpreter you're using or some extra build step seems like a big win.
We invented Bower and npm because people thought it was a good idea to have 20MB of JavaScript on a page, 19MB of it written by third parties. Generally speaking, this turns out to have been a mistake.
Dart was supposed to replace JavaScript, and was head-to-head with TypeScript, CoffeeScript and many others in that goal.
Obviously it failed at that, and was rescued by the Google AdWords team, which adopted it.
Now they are trying the 2nd coming of Dart via Flutter; it remains to be seen if it will ever take off, or how committed Google is to releasing a production version of Fuchsia.
That being said, the Dart project has long abandoned the goal of replacing JavaScript in the browser, and instead provides a VM, Flutter, and a to-JS compiler.
Many people do not know this, and have written the language off entirely. Hence a lot of the negative reaction to Flutter.
(Though, if I'm quite honest, this is a bit off-topic from the OP)
Don't forget that Dart now has an AOT compiler and is strongly typed. It is close to Java in performance in a lot of workloads (faster in some).
Syntactically, Swift is almost identical (Dart being older), aside from GC vs ref counting. I think it has more of a future in the Java managed-language area (but with better typing, better syntax, and first-class functions with closures).
I'm on the Dart team (but I don't speak for the entire team here). Dart had two initial goals:
1. Get a native Dart VM into Chrome and eventually other browsers.
2. Get a significant number of client-side web developers that were using JavaScript to move to Dart.
It's probably not obvious, but these goals are in tension with each other. In order to motivate adding a giant new VM to a browser, you need to make the language pretty different from JS. Likewise, you need to make your implementation much faster than JS.
Both of those push you down a path where interop with JS is difficult. You don't want your language's semantics too close to JS because that reduces the value proposition of the language. And you don't want JS interop requirements to limit how you implement the VM around things like garbage collection.
But for (2), to get people to move, you need the absolute smoothest migration path you can get. You'll make all sorts of compromises and edge cases in your new language to reduce friction when getting developers to migrate to yours and you'll do anything to make interop seamless to support heterogeneous projects. (For example, TypeScript pokes quite large holes in its type system in order to play nicer with JS idioms.)
The Dart leads prioritized (1) over (2). The idea was that the VM would be so great that users would flock to it, giving us (2). That didn't work out, unfortunately. In practice, I think it's very hard to create a language implementation so much better that it trumps the value of existing code. So you really do need to win at (2) at all costs, if you want a successful web-only client-side language.
That's the approach TypeScript has taken, and they did a fantastic job at it. Having one of the world's best language designers doesn't hurt.
In the past couple of years, in response to this and other changes in the landscape, we pivoted Dart. We now aim to be a multi-platform client-side language. In particular, we're the application language of Flutter, a cross-platform mobile framework.
Flutter is a very different platform than the web -- there isn't an existing entrenched corpus of billions of lines of code. Performance and memory usage matters more. You can't JIT on all platforms. Developers coming to Flutter are equally likely to be coming from Android (Java) and iOS (Objective-C, Swift) as they are the web.
Those different constraints play well to Dart's strengths. And, in particular, they align nicely with Dart's move to a full, sound static type system. Dart 2 is more "C# with less boilerplate" than "JS with more types".
Dart is still also a web language, and the better static type system really helps with static compilation to JS, but it's not our only path to success.
Thanks for your insights. I've always liked the Dart language; it seems to me to be a fine design, one that could be widely adopted. I was disappointed to see Google fail to achieve goal #1. I'll be watching Flutter. Good luck, and thanks for your work on Dart.
> 1. Get a native Dart VM into Chrome and eventually other browsers.
> 2. Get a significant number of client-side web developers that were using JavaScript to move to Dart.
> The Dart leads prioritized (1) over (2). ... That didn't work out, unfortunately.
That's what I remember hearing when Dart was just getting off the ground. So what happened? Script tags can specify text/javascript or text/dart. Did the Chrome team just veto any integration? Did a prototype in Chromium ever even exist? Google put out experimental QUIC stuff before, so why not a new interpreter?
I'm convinced if it was out there we'd have seen some use and some hype! I'm a little disappointed to hear Dart pivoted away. Such high hopes!
How would you characterize the positioning of Dart with respect to Go? Dart is for client-side, Go is for server-side? I'm also curious if you see having separate languages for these roles as desirable or just incidental.
Disclaimer: I've been working with Dart for 5+ years now, I'm running several small server-side Dart apps myself, and I also contribute to the Dart app that is behind pub.dartlang.org
The Dart VM itself is great for server-side work; however, Google is focusing on the mobile and web tooling and support. While they do develop server-side packages, e.g. for AppEngine, gRPC, or Memcache support, connecting to databases like PostgreSQL goes through community-supported packages, and sometimes it is hard to find an actively developed one.
Considering these limits, there are still good server-side frameworks, and there exist a couple of big full-stack Dart applications. I've created a HackerNews-crawler twitter-bot (@DartHype) in a matter of hours in Dart, and it has been running almost unchanged since then. Not that it is a big feat, but it was an easy thing to implement given the ecosystem.
If you have a fresh project, and you can select your database and other parts of your stack, Dart can be a good choice. Depending on the domain, the performance is close to that of Go or the Java VM, and it is much easier for a beginner to pick up than other languages, while the tooling provides more safety than JavaScript or TypeScript.
However, if you need to connect to Sybase, it may not be the best choice.
The Dart community is tiny and package selection is incredibly limited. I'd argue that Dart would be a very poor choice for most developers. It's not easier to pick up than something like Rails or even Spring.
Your argument is noted; we just happen to disagree. I've mentored high school students for a couple of weekends to help them build their mobile app. Java (Android): struggle with the bloat. JavaScript (React Native): shoot themselves in the foot a couple of times, for lack of tooling. Dart (and Flutter): instant success.
The language, its consistent API, the IDE and tooling support with the static analysis is just great for beginners and advanced developers alike.
People like to hate Dart because it threatened to take away their beloved JavaScript. For those who have actually tried in the past few years, I only hear they wish their IT stack could be migrated to Dart. If you start a new project, choose wisely :)
I've recently started taking Dart seriously. Could you point me toward a good community / community resources? The official documentation seems a bit sparse...
> How would you characterize the positioning of Dart with respect to Go?
I think it's easy to over-estimate how much "positioning" Google actually does with projects like this. We are a very big company and different parts work fairly independently of each other.
Your description is how I think of the two languages, but you might get different answers from different people. I like classes and object-oriented programming in general, and I think it's a fantastic fit for UI applications. So I think Dart is an easier fit for that domain.
Meanwhile, Go's concurrency model and nice standard library seem to be a good fit for servers.
Given the breadth of software people write today, I think there's plenty of room in the world for lots of languages.
> The Dart leads prioritized (1) over (2). The idea was that the VM would be so great users would flock to it giving us (2).
This is actually why I, as a developer, decided not to use Dart: because it would give Chrome a competitive advantage and more control over the other browsers. Chrome has enough advantages as it is, and I like having competition in the browser market.
The hope was that other browsers would eventually have Dart VMs too and there would still be fair competition.
In many ways, this is similar to the path that asm.js/WASM took. First it's polyfilled to JS in all browsers. Then some browsers get faster native support while still polyfilling the others. Then eventually all browsers support it natively.
The initial asm.js design was a little different because as a subset of JS, the polyfill was a no-op. But the binary form of WASM, I think, required a JS polyfill on browsers that didn't support it natively.
Dart could have taken a similar path, but we didn't get the level of user excitement required to motivate browsers to follow along.
Hi Bob - quick comment on this. The comparable to DartVM is not WebAssembly but PNaCl, which was Chrome-only. Wasm was much easier because asm.js proved the concept source-compatibly and in multiple browsers. I spoke to Anders Hejlsberg, Steve Lucco, and others at MS in fall 2013 and they got on board, even using OdinMonkey code licensed under ASL2 by Mozilla to overcome MS objections to the MPL.
DartVM like PNaCl was trying for too much in Chrome, exceeding what other browsers could afford to embrace at high direct and opportunity costs in a competitive post-Chrome browser market. A spec would take many years and multiple competing implementations to forge. Code is spec with such big single-company-grown projects.
My previous question in 2016 was "Reconcile Dart and Go", the best answer I got was: """This isn't true; they're both general purpose programming languages with strong static type systems and decent async I/O stories. You can write a web app or a server in either language, though Dart has a better ecosystem for frontend development, and Go has a better ecosystem for backend development.""" https://news.ycombinator.com/item?id=12131397
It would be great if you could give a new comparison (post-Dart-pivot) as to how you differentiate between Dart and Haxe when both (apparently) have similar goals.
In general, I dislike existential questions about programming languages. No one seems to do that for other product categories. "Why should Monet and Renoir both exist?" "We already have Castlevania. Why do we need Metroid?"
I understand languages are a little different because each requires an ecosystem that is fueled by programmer attention. There is an opportunity cost that time developer X spends writing a library for language Y is time not spent writing a library for language Z.
But, overall, I try not to get too hung up on that. I find thinking of things in zero-sum terms stressful and often not very useful. There are enough programmers to support a wide variety of languages. If we're worried about not having enough total nerd capital to support all of those ecosystems, I think there are plenty of other inefficiencies we could focus on. Eliminating the 7,000+ different Node libraries for doing asynchrony would probably free up a few person-decades of effort.
In terms of mindshare, ecosystem, and real usage, it's a failure (when compared to popular languages).
Regardless of how good a language it is, typescript fits in a similar space and is much more popular, has a more vibrant ecosystem, and is easier to migrate to.
One thing that allowed TypeScript to succeed is that it was never proposed as a native browser language; it was a transpiled language from the start. Once it came out that Google wanted to make Dart a native browser language, where it would likely displace warty ECMAScript, Eich immediately railed against it and said Mozilla wouldn't support it, limiting what could be done in Dart without incurring code bloat during transpilation.
Flutter is relatively new compared to when Dart was created; Dart's initial goal was to be a language for web programming, and outside of Google, I think almost everyone agrees it failed at that.
They have changed their minds many times about the design/future of the language, and the churn is still being felt today: see Dart 2.0 (not released as stable yet), which some would say is a new language compared to Dart 1.0.
Initially Dart was (another) wet dream from Google that would take over the world by overthrowing JavaScript and conquering the Web. It would do what TypeScript does and much more, so by that metric it is a complete failure.
They gave it another life with Flutter and it looks like it can be a big deal this time.
I have got to know a few small teams in my local area developing in niche languages (including Haxe and Dart), and they do not talk about it on forums or at meetups. They are busy developing their product; the platform's quality and features help them, the performance is good, and they don't care what the rest of the world thinks amid the current hype trends, which will fade soon anyway.
I lost track how many times I've heard my friends cursing JS and even TypeScript, and telling me they wish they could use a sane language instead. Well, you actually can, and I believe should.
Maybe it is just me, but Javascript is not my favourite dynamic language. Given projects like ReasonML and other languages that have type inference, prototyping is not harder than it is without types. Is it a valid argument that types slow you down? Not sure.
I was not happy having to do TypeScript on a project, thinking along the lines of your point about it slowing you down, which is completely true initially if you are forced to tslint it to hell. However, I've done a 180: TypeScript is very nice when you relax the tslinter and sprinkle it on as you go, which gives you that extra feeling of being auto-guarded where it's practical, but without it getting in your way, which is exactly what Ryan was saying in his Deno approach. I've avoided loads of run-time errors thanks to VSCode knowing types ahead of time, for example.
Basically you turn off as many settings as possible that make things impractical. It's more about how you work; it's not a catch-all. I have VSCode and installed a supporting library to work with it. Basically it tells me what's wrong and how to disable it as well. I either disable a rule inline or go to the specific tslint rule and disable it globally. It is most difficult when you have to use another person's boilerplate that was built without TypeScript AND tslint is turned all the way up.
To have nice prototyping & development, I use it more like a typechecker than a compiler.
I set "transpileOnly" for the tsc loader, and set up a separate, stricter command like "npm run typecheck" that runs "tsc --noEmit ./src/index.ts". I usually add only strict null checks and that's enough for me. I also disable most of tslint, as it's way too heavy in a standard CRA app, for example.
This way you get hints from tsc in the editor so you see what is wrong, but at the same time you can run your app without fixing everything upfront.
At least some research into types and productivity showed that productivity was highest when function signatures have to have types but type inference exists inside functions (like auto in C++, var in C#).
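In TypeScript, that style looks like the following sketch: annotate the boundary, let the body be inferred (the function and variable names are invented):

// The signature is explicitly typed...
function totalPrice(prices: number[], taxRate: number): number {
  // ...while the locals are inferred (both are number).
  const subtotal = prices.reduce((sum, p) => sum + p, 0);
  const tax = subtotal * taxRate;
  return subtotal + tax;
}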
For security I make AppArmor profiles for each script. I think the module system and NPM are what made Node.JS popular. And personally I like the function-passing style, aka callbacks; Promises and async/await look more terse but are actually more complicated and prone to errors.
I also don't like that TypeScript extends the JavaScript language and adds a compilation step to it; it's much better to add JSDoc-style type comments, and you get the best of both worlds. Although I don't think type-checking is needed if you already do testing. For me static typing is mainly for performance, like in Dart; you can't simply make JavaScript more performant without it. With TypeScript you have "performance optimization" but without the performance benefit.
If your code needs type annotations for others to understand what it does, you need to use better names. The type annotations are for the compiler. Auto-complete and parameter hinting can be done via inference. And public parameters and methods should have documentation.
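That comment-based approach exists today: TypeScript can type-check plain .js files from JSDoc annotations when a // @ts-check comment is present, with no compile step. A minimal sketch:

// @ts-check

/**
 * @param {number[]} points
 * @param {number} scale
 * @returns {number[]}
 */
function scaleAll(points, scale) {
  return points.map((p) => p * scale);
}

scaleAll([1, 2, 3], "2"); // flagged in the editor, yet still runs as plain JS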
Interesting that he's unsure about Go. Would be nice to hear about why.
One huge strength of Node (& Deno) is having the same language and tools on the front end and back end. It's a huge benefit to have a team on one language, even if it might not be the optimal choice. I'm not sure if that is the problem he had with Go, though.
I've never understood why this is a benefit when the one language is - let's say - suboptimal in many ways.
The interfaces between server- and client-side code should be well-defined and language-independent. You don't want the same people writing both, because it's harder to check that your API is working to spec if it doesn't get fully independent testing.
There's also a lot of useful server-side optimisation and security management that Node - or a high-level replacement - can't handle.
V8 may improve things, but it's going to have to improve performance a lot to be competitive.
You don't see how having to know only one language is easier/faster than having to know two languages? A lot of people are doing both frontend and backend work. Oftentimes it's a single person running the show. I'll tell you right now I use node for ALL my web projects, because I only have to focus on a single language. When I wear every hat, managing the domain, server, databases, mail, user questions, and everything else involved in running a modern web app, one less thing to learn sounds great.
When the only thing you have is a hammer, everything looks like a nail.
You’d be surprised at how well screwdrivers work on those funny-looking nails with the threads and the slotted heads.
Snark aside: learning a second language is not really that hard, and will make you a better user of the first language in ways that you couldn’t possibly have predicted.
I know plenty of languages, it doesn't mean I want to use more than one for a single project. I don't know what the argument is here. It's not like we're discussing why some no name language is being used, Javascript is prolific to say the least. It's backed by two of the biggest companies on the planet. As much as some wish it would, it's not going anywhere.
The argument is, "just because some of the biggest companies on the planet have started equipping their carpenters with specialized screw-hammers, doesn't mean that they wouldn't be better off using the right tool for the right job".
For my part, I don't want it to go anywhere. Javascript powers some of the most interesting and exciting things in the world of software right now - but that doesn't mean that it's the best solution for every problem.
I don't see it. Any old idiot can learn a programming language. The difficulty doesn't come from the language, it comes from the environment and tooling.
Then the environment and tooling sucks. This is a large part of what Ryan Dahl tried to solve with Node, and I think he got a lot of it right, leading to immediate popularity of the project.
Most of the "infrastructure" stuff around deployment and, obviously, the JS ecosystem with all of its build tools and dependency explosions is still a mess, but it's important to keep eyes on the goal of getting rid of incidental complexity. Early Node did that well, modern typical Node development is of course another matter.
Once you get rid of as much accidental complexity in the environment as you can, having one less language to use is a huge win.
You completely missed the point. My point isn't that node is better or worse than any of them, my point is that if you're starting from scratch with no back-end knowledge, then the benefit of already knowing the language you're going to use on the back-end is dwarfed so heavily by the other stuff you have to learn that it's inconsequential.
Half my Rails apps end up with JS code executed server-side by TheRubyRacer. It's odd, but significantly better than having to keep duplicate logic in sync, some written in Ruby and some in JS.
But in what specific way does it make things simpler? What are the tangible benefits? I've been working in a Node/React environment for the last 2 years and there is virtually no overlap between the code we use on frontend vs. backend projects.
So much this. In theory I can see why this would be appealing, but in practice our Node backend and Ember frontend have very little code they can share. Maybe there's an argument for devs being able to jump back and forth, but again, in practice the paradigms are very different, so I'm not sure there's a huge benefit.
Quite a bit of the server & client code is shared: card handling logic, deck code handling in etgutil, user management (easy optimistic protocol handling), and SVG rendering was used on both until recently, due to Chrome having a buggy SVG renderer. Eventually I'd like to move the game engine to being serverside too.
Because not everyone is using React type frameworks on the frontend? There is old fashioned javascript still, and because I know it on the front end, I know it on the backend.
But I know lots of languages, and just because I'm using JS on the frontend doesn't mean that's the best backend choice.
Whether I'm using React or Angular or Ember or Vue or jquery or vanilla JS on the frontend doesn't have any impact on the code I'm writing on the backend. Because one is concerned primarily with handling UI, and one is primarily for data access. There's just naturally not a lot of overlap between those.
That's awesome you know lots of languages, well done. It's not necessarily about what's the best backend choice; it's about getting shit done as efficiently as possible. For the vast majority of projects, you're never going to reach whatever limitation in Node performance you may or may not be suggesting. I'm not sure why you're so hung up on the front impacting the back; it impacts MY workflow. I don't have to switch gears and look up documentation on how to do x, y, or z in one language when I already know it in the other. Do you use arrays? Variables? Any basic programming structures? There is plenty of overlap.
I'm not talking about overlap in simple language features, I'm talking about actual code reuse. It's often touted as the reason to use JS on front and backend, but it's a pipe dream. It almost never works in practice.
If I'm trying to get things done as efficiently as possible, I'm not using Node because the ecosystem is pretty terrible compared to other languages like Ruby and Python. Sure there are lots of libraries available in npm, but I've found that a lot of them are janky or hard to use, missing features, poorly documented, incompatible with each other, etc.
I'm not sure where it's touted that code reuse is the reason to use the same language on both sides. The reason is that it's the same language, and you're not going back and forth between different ones.
> > The interfaces between server- and client-side code should be well-defined and language-independent.
> They don't have to be, and that is the point. If you write JS/TS front and back you can make everything much simpler, with the associated benefits.
That post implies that if you're writing in the same language on the front and back ends, it not only makes writing that language easier, but also somehow makes it easier to marry the two. Since most web services operate through some sort of HTTP interface, which is language agnostic, I can only infer that that poster somehow meant code reuse.
So there's that, plus I've seen countless times before over the years where people explicitly tout code reuse as the primary benefit to using JS on the backend. I'm not buying any of it.
You use what you want, I'll use what I want. No sense even arguing about it. In the end, users don't give a shit what made what, as long as they get something useful from it.
Meh, forget about code reuse; it's about using only one language for a full-stack web app and knowing it well.
I really don't see how Python or Ruby is any better than Node.js (it's more a matter of taste and familiarity, and you don't sound that familiar with Node.js/JS in general). If anything, Node.js, because of V8, is light-years ahead of the official Ruby and Python implementations, and all three communities have lots of bad and good libraries.
It is very useful if you want to make your devs not client or server specialists but both. This works well in many organizations that I have seen: a developer is responsible for building a feature from the ground up, which speeds things up a lot.
Having the same language allows you to share some code between frontend and backend. This can be used to prerender your SPA on the server, for example, or to share some logic with the client to enable offline usage.
Being able to share code is really not that great because you don't share that much code between front and back in reality. Being able to share the paradigms, structure, mindset and tooling (like linters, formatters, code generators, packagers, whatever..) is what's awesome. You remove a whole lot of context switches and cognitive dissonance, smoothing the train of thought which greatly eases the expression of ideas.
Source: used quite a bunch of golang + gopherjs, ruby + opal, js + node
Depends a great deal on what the application is, I think. If you're doing heavy lifting (or anything a bit complicated) in the client and then need to make similar functionality available to APIs offline it's a godsend.
My current hobby project is a kind of REPL/programming language that does all of its compute in the client, and being able to hit a button and export a "serverless" microservice is lovely side-effect of being able to run JS on the server. It isn't exactly "the backend" to my application, more "a headless client exposed to the net," though, I guess.
(As someone new to JS, the frigging Node ".mjs" fiasco almost derailed the whole thing, though... What were they thinking?)
Edit: oh, and "optimistic rendering" is also a big win -- if you can show the result of an action before it has persisted (or preview it before the user has done it) shared code can also come in handy. Though perhaps we're too scared of network round-trip times.
It depends. I was working on a websockets based game using node.js on the backend and React, redux & canvas on the front end. As my game grew, more and more code went into the /shared/ folder. It's extremely useful to be able to mirror the game logic that's run on the server for security in the front end for convenience and performance (avoid waiting for round trips for everything).
For example, I had a standard RPG-style combat system. Being able to share all of the code that does the combat calculations lets you put those calculations into the UI as much as you want (sketched below).
I think standard CRUD web apps are different because there's much more asymmetry in the responsibilities of the back end and front end.
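A minimal sketch of that kind of sharing, with invented module and function names: one file, required verbatim by both the server's authoritative game loop and the client's UI preview.

// shared/combat.js -- the same file ships to the server and the browser bundle
function attackDamage(attacker, defender) {
  const base = attacker.strength * 2 - defender.armor;
  return Math.max(1, base); // always at least a scratch
}

module.exports = { attackDamage };

// server: computes the authoritative result and persists it
// client: shows the same number immediately, before the round trip completes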
I'm working with a system that lets you develop code in the client, then gradually move it further back in the stack as needed. It's incredibly useful for testing.
I've never looked into server-side react, can you explain a bit about the requirements? Does the server keep a lot of session data around to maintain a shadow DOM for each connected client? How far is it in practice on the continuum between a fancy templating language and a full app framework? Does anyone put server-side React on server-side Redux?
Whatever the case, if people write clients in react to talk to Ruby backends, can't they do the same to get "server-side react clients" that run in front of their backend app? Or does that go against the programming model somehow? Would it enforce too much of a barrier between "the view" and the model than is traditional these days?
While I haven't done SSR myself, my understanding of the general process is that it's about rendering that first initial page that's sent to the browser. This is most beneficial for speed of initial load and SEO purposes.
An SSR React app that uses Redux would probably create a unique Redux store instance for each new client, dispatch just enough actions to fill in whatever data is needed for the initial render, and then serialize the Redux store state to the host HTML page so that it can be used to re-hydrate the Redux store on the client.
For more info, see the "React Server Rendering" [0] section of my React/Redux links list, and the "Server Rendering" recipe [1] in the Redux docs.
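Roughly, the per-request flow looks like this (a sketch along the lines of the Redux docs' recipe; App and rootReducer are assumed to be defined elsewhere):

const React = require('react');
const { renderToString } = require('react-dom/server');
const { createStore } = require('redux');
const { Provider } = require('react-redux');
const App = require('./App');
const rootReducer = require('./reducers');

function handleRender(req, res) {
  // fresh store per request -- no long-lived session state on the server
  const store = createStore(rootReducer);
  const html = renderToString(
    React.createElement(Provider, { store }, React.createElement(App))
  );
  // serialize state into the page so the client can rehydrate its own store
  const preloadedState = JSON.stringify(store.getState()).replace(/</g, '\\u003c');
  res.send(`<!doctype html>
<div id="root">${html}</div>
<script>window.__PRELOADED_STATE__ = ${preloadedState}</script>
<script src="/bundle.js"></script>`);
}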
The repo itself says it's "Segfaulty," which would be far less likely in safe Rust, for example. Perhaps that's why he mentioned it as a candidate. Regardless, it sounds like he was doing a lot of work in Go, so perhaps it was just the tool that was closest to hand when he decided to build the prototype.
It might have to do with the overhead of calling into C/C++. Since a project like deno will have to do a lot of interfacing with V8, a language like Rust wouldn't have the overhead of cgo when dealing with the embedded VM. Also, while goroutines are one of the strengths of Go, they don't necessarily play well with concurrency constructs in other languages. Since a project like deno will probably be keeping Node's single-threaded, evented model of concurrency and using libuv (also a C interface), that basically means avoiding goroutines.
To your point, though, you'd still use basically the same language on the server and client, since deno runs typescript and typescript can compile to run in a browser. The Go side would take the place of Node's native modules, which currently have a C++ interface.
the point of Go is to have one language that's the best of both dynamic and static languages. the point of Rust is to be safe and fast. if you're building an infrastructure where you're externalizing the dynamic language runtime, such that there's always a host and a guest language, Go isn't necessarily optimized to act as a host language in that context. Rust is, at the expense of being arguably more tedious to program.
He isn't (wouldn't be) using Rust for a dynamic runtime. It would be strictly for bootstrapping V8 and a ffi. With the ffi, you interop with whatever net stack you want.
I think it would be a real shame if the module system design made it impossible to statically enumerate all the modules reachable from a program entry point without executing the program ...
much respect to ryan for coming out like this. i used to think that he might have thought node was the bee’s knees and that callbacks and promises were sent from the gods. glad to hear him confirm what a lot of js devs are feeling right now with the wretchedness of the js callbacks, promises, and generators. there is a better way and im glad he is trying to do something about it.
It is useful for those things. I implemented a fairly involved scripting system that we now use inside all of our server instances. Each script in the system is an entry file mapped to an alias command, and the entry file is invoked from the top level. To get the benefits of async/await, we write the logic of each entry script under main = async () => do_logic(), and call it at the end of the file as main().
I just don't see how this workaround is the correct solution. Just saying that it's generally a bad idea is not enough for me. At some point there will come a time when you want to treat a file as a function, and when that day comes you will want async/await at the top level.
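For reference, the workaround in question looks something like this (do_logic stands in for whatever the entry script actually does):

const main = async () => {
  const result = await do_logic(); // every await lives inside main
  console.log(result);
};

// the "call it at the end of the file" part -- and don't swallow rejections
main().catch((err) => {
  console.error(err);
  process.exit(1);
});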
It's nice to see some recognition of the idea that interpreters are safe by default (excluding infinite loops/OOM), and we can avoid many security concerns (access to files/network/etc.) by simply not including that functionality in the interpreter (unless opted in via a startup parameter, as described).
I'm also a fan of using env vars for configuration and locating dependencies, as mentioned in the talk. Much simpler and easily extensible compared to e.g. vendored directories (requires messing with the contents of the source directory, which requires write access and breaks hashes, etc.), hard-coded system paths like /usr or ~/.some-default-location (causes conflicts when running multiple incompatible versions), etc. A simple env var of paths, e.g. colon-separated, maybe with some sane quoting convention, can be used for all of those if desired, whilst making it super easy to extend-with or restrict-to any other location(s) instead. It's also trivial to "bake in" an env var to an application, just call `PACKAGE_PATH=foo:bar my_app` rather than `my_app` (or make a one-line wrapper script).
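A sketch of consuming such a variable (PACKAGE_PATH is the hypothetical name from above):

const fs = require('fs');
const path = require('path');

// colon-separated search path, e.g. PACKAGE_PATH=/opt/pkgs:./vendor
const searchPaths = (process.env.PACKAGE_PATH || '').split(':').filter(Boolean);

function locate(pkgName) {
  for (const dir of searchPaths) {
    const candidate = path.join(dir, pkgName);
    if (fs.existsSync(candidate)) return candidate; // first hit wins
  }
  throw new Error('package not found: ' + pkgName);
}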
I agree with others that importing from URLs seems like a bad idea: network I/O is one of the least reliable actions we can take, which would make importing far more complicated than necessary (what if we're offline? should we follow redirects? should we check for proxy settings? how should we report errors? etc.). All of this complexity and the inescapable problems of network failures are completely avoidable by just downloading things up-front. Package managers/build tools can fetch whatever they like (URLs, git repos, etc.), however they like (with proxies, caches, etc.), to wherever they like (one big cache, project-specific vendor dirs, whatever), and just stick the resulting directory (possibly of symlinks) in the program's environment (e.g. via a wrapper script, as above).
URLs can be relative and absolute paths like HTML supports, they don't have to hit the network. This still leaves room for a package manager, but it has to put packages in known places, or resolve imports at install time.
Sure thing. How do you get such a JSON value into an application though?
What I'm saying is to use env vars. You could put your JSON straight into an env var, or if you want a persistent JSON file on disk then put the path to that file in an env var.
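Concretely, something like this (CONFIG_JSON and CONFIG_FILE are invented names):

const fs = require('fs');

// either the JSON itself, e.g. CONFIG_JSON='{"port":8080}' node app.js ...
const config = process.env.CONFIG_JSON
  ? JSON.parse(process.env.CONFIG_JSON)
  // ... or a pointer to a persistent file on disk
  : JSON.parse(fs.readFileSync(process.env.CONFIG_FILE, 'utf8'));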
Although one of the more interesting "talks" I ever heard him give was a discussion on promises. I was lucky enough to be in (the ridiculously long) line behind him waiting for food at NodeConf in 2012 and he and an engineer from Microsoft had a pretty spirited discussion that explored the subject in way more depth than I had previously thought possible. Ironically, given that he now considers the removal of promises to be a mistake, it was the engineer from Microsoft who took the pro-promises side of that argument.
He buried the lede. The talk highlights some regrets, but the most interesting part is a discussion about a new framework he's building based on Go and TypeScript: deno (https://github.com/ry/deno)
I'm glad to hear he (and the Node community in general) has come back around to promises.
I remember the very early versions of Node.js that had them (looks like they were added in v0.1.0 and removed in v0.1.30), although they weren't the true chainable promises we have now.
There is legacy code that assumes that calling `readdir` will yield undefined, and will have just passed that result on to a function that alters its behaviour based on whether a parameter is undefined.
Just how easy JavaScript, like lisp, makes it to pass around pure anonymous functions and make good use of closures (if you are willing to tolerate a few brackets). Promises had to ruin it by wrapping functions into stateful objects.
Really, the best case I found against promises comes from, believe it or not, the pro-promise chapter of a book:
I really don't understand how the author managed to view the things he described as positives.
Just the length of the chapter is a testament to how overly complex promises are. On top of bringing in a ton of jargon such as "thenable", "rejection", "fulfillment", "future value" (I think the author means you are guaranteed a nextTick, whooptidoo), "uninversion of control", and "revealing constructor", it admits that promises do not solve most of the issues mentioned about callbacks in the previous chapter and often make things fail in more subtle ways.
It describes how you have to manually call things like Promise.race() to solve some of the problems, hardly making things more automatic than callbacks.
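For the record, here's what that manual Promise.race() plumbing tends to look like, e.g. for a timeout (a sketch, not taken from the book):

const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timed out after ' + ms + 'ms')), ms)
    ),
  ]);

// withTimeout(fetchUser(id), 5000).then(render).catch(handleError)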
He talks about "Thenable Duck Typing", saying horrifying things such as:
"Given that Promises are constructed by the new Promise(..) syntax, you might think that p instanceof Promise would be an acceptable check. But unfortunately, there are a number of reasons that's not totally sufficient.
Mainly, you can receive a Promise value from another browser window (iframe, etc.), which would have its own Promise different from the one in the current window/frame, and that check would fail to identify the Promise instance.
Moreover, a library or framework may choose to vend its own Promises and not use the native ES6 Promise implementation to do so. "
and
"The standards decision to hijack the previously nonreserved -- and completely general-purpose sounding -- then property name means that no value (or any of its delegates), either past, present, or future, can have a then(..) function present, either on purpose or by accident, or that value will be confused for a thenable in Promises systems, which will probably create bugs that are really hard to track down."
He praises immutability but ignores the fact that promises add mutable hidden state between your call and callback.
He mentions that promises can silently swallow errors and calls it the "Pit of Despair". There is much more. Read the chapter.
I personally prefer promises over callbacks, but kudos for your compelling reasoning. Promises are indeed a beast to tame; among other things, they make debugging very difficult.
I think the ecosystem evolved to just agree we should move as fast as possible to using async/await and be done with callbacks AND promises.
Let's not forget Node.js wasn't created in a vacuum but was based on CommonJS (module format and standard library), also implemented by TeaJS/v8cgi, Helma, and many others [1]. In fact, server-side JavaScript was a thing as early as 1999 or before (Netscape Server). That it's based on a highly portable language also used in the browser is what made it attractive over alternatives for me, back in 2012 or so.
Try "right-arrow" (you can keep it pressed). Sounds stupid but it used to be an issue for me and now I can watch lengthy videos in a minute, a bit like how one can go through a book quickly by flipping pages and going back to the interesting bits.
"Ryan: Yeah, I think it’s… for a particular class of application, which is like, if you’re building a server, I can’t imagine using anything other than Go"
It's wild to see this after having been around in the early days of Node, and now having also moved most of my attention to Go as well.
Ryan has always had, IMHO, excellent taste and the good instincts and guts to make things simpler. A lot of the accidental complexity in our industry persists simply because people tolerate it, and it's great to see that he hasn't lost his fire in that area. Looking forward to seeing more about Deno.
The point about promises was an interesting one for me personally since I was one of the people at the time who argued in whatever small way for taking promises out, in the hopes that the language community would come up with something better.
I have mixed opinions on the topic now, but it's interesting to speculate about what might have been. The Node.js ecosystem was weakened by having different ways of handling the async question, and by a lot of developers not knowing the best practices in using callbacks effectively and leading to "callback hell", which is totally unnecessary. It's possible that having promises baked into an early Node would have constrained or even fragmented what we ended up with in the language, and that would have been worse. I'm still a little disappointed that we didn't end up with anything more elegant than promises in the language itself.
It's interesting to compare package management in Node and Go. NPM got the early adoption and became the de-facto package manager at a time when JS had no such thing. In Go, the package manager question has been unsettled for a much longer time and there are more chances to experiment. Package management is simply difficult, and it seems impossible to design a good programming language and resolve the package management questions at the same time. It's sad to see some of the criticism against NPM... it's much easier to criticize than to build a better system.
It's interesting to hear him say that npm and node_modules are regrets since lots of complaints about Go packaging from people new to Go ask for something similar...
yarn just seems like a set of incremental improvements over npm? Which is great, we're all for improvements. However, the vehement complaints I've seen about npm [0] make it seem that nothing short of a complete re-architecture could be tolerated. yarn does not seem to be that.
I don't agree with those complaints, but I do agree with you that "some parts" are really good. node really figured out the right search strategy for an unscoped import (e.g. require('foo') rather than require('./bar/foo')). Just look for a directory with the name "node_modules". If you don't find it, go up one directory and try again. So simple! So predictable! So complete! It works so well, all manner of "left-pad" abominations can be supported. Any other system should think very carefully before using a different import search strategy.
[0] with the exception of those related to path depth: I think those are resolved now? I wouldn't know because I stay on an OS that doesn't go out of its way to frustrate me.
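That search strategy is simple enough to sketch in a few lines (simplified; the real resolver also handles core modules, file extensions, package.json "main", and so on):

const fs = require('fs');
const path = require('path');

function findModule(name, fromDir) {
  let dir = fromDir;
  for (;;) {
    const candidate = path.join(dir, 'node_modules', name);
    if (fs.existsSync(candidate)) return candidate; // found it
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached the filesystem root, give up
    dir = parent; // go up one directory and try again
  }
}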
I don't understand the rationale behind using V8 for server code. Yes V8 is a general purpose JavaScript engine but ultimately all of the performance trade-offs and design decisions are made with browsers as the optimization target.
It sounds like Ryan is still interested in making V8 work so I have to ask: why do you want to be writing server code on a client browser engine?
Because that's the best existing engine (even with Mozilla nicely catching up), with dozens of geniuses and millions of dollars dedicated to it since the beginning and for years to come.
Why do you think JS became so much faster compared to the anemic octopus it was before?
And can you imagine the perf of Ruby or Python if a tenth of those resources were allocated to them?
"The protocol for front-end / back-end communication, as well as between the back-end and plug-ins, is based on simple JSON messages. I considered binary formats, but the actual improvement in performance would be completely in the noise. Using JSON considerably lowers friction for developing plug-ins, as it’s available out of the box for most modern languages, and there are plenty of the libraries available for the other ones."
"Performance" in a text editor means latency, and it's not going to matter there when the text editor also has to edit the text, update the display, etc.
"Performance" in a server also means throughput. There's a reason why protobufs exist and it is all about performance (and type safety which is related) and there's no way JSON serialize/deserialize overhead, in both directions every time, is going to be a good choice here.
Ryan rightly warns about adding “cute” and unnecessary features to projects. Then he goes and adds the “Load module directly from a URL” feature with all its complexity to Deno.
Ryan, kill that feature now. It's not needed. It's just cute.
One-line jokes are generally not well-received on HN unless they are so exceptionally amusing and original that they are beyond reproach. The standard rationale for this - which can make HN seem very dry and humourless at times - is that people want to avoid HN becoming like Reddit.
thanks for answering. but thanks to the parent comment i learned about a new (albeit widely recognized) author. that's useful because we all come to hn to learn things. i suspect most downvoters probably don't even understand the reference and just see a joke qua joke. boo.
He mostly regrets the way Promises work (edit: that the async programming APIs don't work with them), security/sandboxing, and the module system.
Then he introduces "Deno", a successor which is under construction and which will fix all these problems. It exposes a language based on TypeScript. Internally, some parts of Deno are written in Go.
i could see nested callbacks favoring two spaces, but overall it's not a style unique to javascript.
for example, a lot of ruby i've seen (admittedly it's been a while) used two spaces, and i don't recall it being any more prone to deeply nested code than other languages.
that said js code tends to be rendered in a variety of other places like browser dev tools, which may influence this as well.
Javascript has a lot going for it from async-everything to conveniences like destructuring. And it works in the browser and has things like Typescript.
I have to find good reasons to use another dynamically-typed language over Javascript.
I would argue async-everything is the biggest hassle with Javascript. Most of the time, I need things to run synchronously: do this thing, then do that thing based on the result of the first thing. The need for async is the exception. So what we end up doing is expending extra effort forcing all the async stuff to run synchronously, when that should be the default case.
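To make that concrete, the common sequential case has to be spelled out with await at every step (a sketch; the function names are made up):

async function handler(id) {
  const order = await loadOrder(id);         // do this thing...
  const invoice = await buildInvoice(order); // ...then that thing, based on the result
  return send(invoice);                      // the "default case" the commenter wants
}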
I get what you're saying, and yes, async/await is much better. But the entire reason it has to exist (and be used everywhere) is because javascript's default async behavior gets in the way so often. The irony is that when I do want some sort of async behavior, I often have to reach for something like bluebird anyway because I want better control over the Promises. So either I'm circumventing JS's default behavior by using async/await, or using a library that makes promises manageable. Almost never is it preferable to me to use JS's standard plain async behavior.
Thank you for summarizing this so succinctly. The true use case for async in the browser is rare. Sure, you're not blocking the UI while making server calls, but how often do you need to click around while waiting for something on the server to happen?
TBH I felt like a lot of this could have been resolved if he stayed involved with the community and opened issues with the relevant repos.
Seems pretty classless to rage quit the community, brag about how much better Go is, then mark your return at a JS conference by shitting on Node and hard-forking the server-side JS community.
OP was incredibly immature and the resulting discussion was worthless.
> I do not need any help from you thanks. This is aimed towards Ryan
Yikes.
Also, calling everything a horrible miscarriage is not a discussion.
Pointing out the problems with something is easy. Anyone can do it. What's hard is evaluating problems in relation to the trade-offs that were made. Only amateurs think everything is "horrible" as if there are never trade-offs.
I agree with my sibling: it's only a good example of a worthless GitHub issue "conversation".
The community. Specifically that someone thought it was appropriate to post an issue shitting on someone's old project in that person's new project's issue tracker.
It's also kind of pathological when someone focuses on just one technical aspect of software, like performance or security, and is completely unable to reason within the larger context of people doing things for other people.
The original post could have been constructive. It's at least a good possibility that a new thing can be developed based on criticism of an old thing, and some of the author's points were just building on what Dahl said, so it could have been a useful discussion. I don't agree with another comment in this thread that it's easy to (succinctly and with depth) point out what's wrong. The person who initially responded to the issue could have included these possibilities rather than playing full-on cop. These systems are still largely organic. But the original poster's replies are hostile and in light of the "radio silence" request mentioned later in the thread, it wasn't ultimately a good post.
"And then there were people adding modules. I thought to myself, this projet is done now. So wrong."
Seriously, why are we as a community still entertaining these outlandish statements and attitudes? If you heard that sentence as a non-tech person, you'd swear modules were a computer virus or something, so inherently bad that they're not even worth discussing.
Props to Ryan for his amazing work, but come on let's try to be civil and respectful here.
That's not what he meant. He was not saying "People are adding modules, modules are bad, this project is ruined, [modules are] so wrong." He was saying, "People are creating modules, people are building on top of my work, this project is COMPLETE. [I was] so wrong."
I'm glad you elaborated on this because I also interpreted it as an overly negative attitude towards modules. He was definitely a little nervous while talking!
i'm not sure what you mean by "outlandish statements and attitudes".
when he said "this project is done now" my takeaway was that he didn't anticipate the popularity of node. and "so wrong" was him acknowledging that he was very obviously mistaken.