JavaScript in 2015 (glenmaddern.com)
417 points by geelen on Jan 7, 2015 | hide | past | favorite | 125 comments



God, he makes it look good. The demo does give me a few of the ol' "too much magic" heebie jeebies, but frontend JS development is so badly lacking in compelling package management stories that I'm going to give jspm a shot.

Here's to 2015, I suppose.

P.S. Glen, if you're reading this, I really would love to see a youtube of a DJ hooked up to automatically synced GIFs. Don't be a tease.


npm is not just a server-side package manager. It handles package management for the front end very well, and has for a long time. Learn more here: http://browserify.org/


I'm a bit torn over using npm this way. On one hand, being able to use tools already built in Node land such as the EventEmitter is nice. On the other hand, having to use Browserify and navigating the mess of node modules is a pita. If the dependency structure with Node modules for npm dependencies were handled better, I would likely be all for it, but it is a major negative for me, especially if a quick project starts off as a mix of a Node server and contains a moderate amount of frontend heavy logic.


I happily use Webpack and NPM and have no problems with it.

NPM is in fact fine for frontend package management, it's not just about being able to use "Node land tools". All my frontend deps are on NPM, I published some NPM packages myself, and I don't even use Node.

Yes there are small issues (like wanting to be able to specify non-dupe requirement for some libraries) but overall NPM works fine for frontend. Much better than Bower if your app is moderately complex.


Agreed. I never saw the point of Bower, since it requires Node itself anyway. With tools like Browserify (and, I assume, Webpack... haven't had a chance to use it yet), you can load dependencies into your project extremely easily: `npm install backbone --save-dev`, then `var Backbone = require('backbone');` in your code.


Do you have time to hear about our lord and saviour browserify?


I thought jspm looked like a direct browserify competitor/replacement. Is that an incorrect interpretation?


I am not familiar with jspm or bower. In the limited 60-second window that I spent researching their sites, it seems jspm doesn't have as much documentation as browserify. If jspm comes up more often on npm then I will think about it.


'I really would love to see a youtube of a DJ hooked up to automatically synced GIFs. ' Yes, this please !


( and if not, an invite to your next rooftop party would probably suffice )


If you have new projects you can try this out on without too much risk, then please do give this a go and report/blog about it!

(I, too, am extremely dissatisfied with the current state of ES5-based JS modularity. Frankly it's incredible that anyone can get anything done at all given the state of tooling for modularity/building/deployment ATM.)


For package management why not use Bower?


Bower is great if you need to install jQuery or one of its plugins, or another very popular JS framework.

When you get into smaller modules, persistence, recursive or deep dependency trees, versioning, automated bundling (i.e. no manual "config" files), etc. then bower is not a strong choice.

A good writeup here: https://github.com/bionode/bionode/issues/9#issuecomment-495...



My biggest issue with the recent additions to the language is that there's now a thousand different ways to do the same thing.

Iteration:

  for (var i = 0; i < y.length; i++) { ... }
  for (var x in y) { ... }
  for (var x of y) { ... }
  y.forEach(function(x, i) { ... })
  Object.keys(y).forEach(function(x) { ... })
Comparison:

  ==
  ===
  Object.is() (would have been a good laugh if introduced as ==== instead)
Of course, this doesn't matter much if you're a single developer. I've started writing a bit of ES6/ES7 and it's pretty cool. But it's going to be a PITA for projects built by many developers of varying experience levels. The nice things about smaller languages is that there's often only one way to do something, so when you write code or review other people's code, your mind is free from the minutiae and you can focus on the big picture instead.
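For what it's worth, the trap is that two of those variants look nearly identical but iterate different things. A quick sketch in plain JS:

```javascript
// for...in iterates enumerable keys (as strings); for...of iterates values.
var y = ['a', 'b', 'c'];

var keys = [];
for (var k in y) { keys.push(k); }    // ["0", "1", "2"]

var values = [];
for (var v of y) { values.push(v); }  // ["a", "b", "c"]

console.log(keys, values);
```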

It's a bit funny that it's when JS is, from the general consensus, finally getting "better" that I'm actually considering more and more switching to a small but well-built compile-to-JS language. I guess smallness and simplicity just matter a lot to me.


Object.is is a very silly addition to the language. It does the same thing as === except in the case of NaN and positive/negative zero.

I mean if you read a polyfill for it, it's such a silly bit of "functionality". And of course the name is terrible. Argh.
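For the skeptical, the entire observable difference fits in three lines:

```javascript
// Object.is agrees with === everywhere except NaN and signed zero.
console.log(Object.is(NaN, NaN)); // true  (NaN === NaN is false)
console.log(Object.is(0, -0));    // false (0 === -0 is true)
console.log(Object.is('a', 'a')); // true, same as ===
```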


The new Map and WeakMap classes use Object.is() to determine if two keys are the same (otherwise it would be impossible to use NaN as a key to a map).

Whether this algorithm should have been exposed to users is debatable, but it exists for a good reason.
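Strictly speaking, Map key equality is the SameValueZero algorithm: it matches Object.is for the NaN case but treats +0 and -0 as the same key. Both cases are easy to check:

```javascript
// NaN works as a Map key even though NaN === NaN is false.
var m = new Map();
m.set(NaN, 'found me');
console.log(m.get(NaN)); // "found me"

// +0 and -0 collapse to one key (SameValueZero, unlike Object.is).
m.set(0, 'zero');
console.log(m.get(-0)); // "zero"
```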


I would think that most languages suffer from this at least as much as JavaScript. The solution is to have guidelines and enforce them through code reviews. Linters can also catch some of the rules.

I'd say that JavaScript's benefit is that it's so simple that there are not too many solutions to do the same thing, unlike massive enterprise languages like C# and Java.


Just looking at your example:

    for (var i = 0; i < y.length; i++) { ... }
    for (var x in y) { ... }
    for (var x of y) { ... }
    y.forEach(function(x, i) { ... })
    Object.keys(y).forEach(function(x) { ... })
None of the "for" variations are considered good practice in ES6. You should be using "let" (or "const" if it's allowed here) to avoid var-hoisting of "i".

Personally, I'd advocate using "for" if you have a need for early return/break/continue -- otherwise I'd go for the first forEach() variant. Or, even better, use "map" and avoid side effects in the function you're mapping over the collection. Unless of course you're doing something imperative.

The fact that the last forEach() variant is possible is a good thing, though I wouldn't recommend its use in this case because it's needlessly complex -- it shows that the language/stdlib is becoming more compositional.
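A minimal sketch of that last point, side-effecting forEach versus a pure map:

```javascript
var nums = [1, 2, 3];

// Imperative: forEach mutates an accumulator as a side effect.
var doubledImperative = [];
nums.forEach(function (x) { doubledImperative.push(x * 2); });

// Compositional: map returns a new array, no mutation involved.
var doubled = nums.map(function (x) { return x * 2; });

console.log(doubledImperative, doubled); // [2, 4, 6] [2, 4, 6]
```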


Yes, "let" is better than "var". I could also have used a fat arrow in the forEach(). But my point was to list iteration variations, so outside of that I wrote traditional ES5.

This illustrates the issue though. "var" is like "let" but without block scoping, so you should almost never use "var", but it's still there to trip newcomers. The fat arrow is like the "function" keyword and most of the time you can use them interchangeably, but if you rely on "this" they're not interchangeable anymore.

This growing laundry list isn't exactly thrilling. I'm glad to have map(), filter(), every() and friends, though.
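A small demo of both gotchas (the closure-in-a-loop case for var/let, and `this` for fat arrows):

```javascript
// var is function-scoped: every closure shares the same i.
var varFns = [];
for (var i = 0; i < 3; i++) varFns.push(function () { return i; });
console.log(varFns.map(function (f) { return f(); })); // [3, 3, 3]

// let is block-scoped: each iteration gets a fresh binding.
var letFns = [];
for (let j = 0; j < 3; j++) letFns.push(function () { return j; });
console.log(letFns.map(function (f) { return f(); })); // [0, 1, 2]

// Fat arrows capture the enclosing `this`; `function` gets its own.
var obj = {
  n: 42,
  arrow: function () { return () => this.n; }
};
console.log(obj.arrow()()); // 42
```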


Have a pre-commit linter disallow "var". (etc. for everything else.)

In a language like JS you cannot have it all, but at least appreciate the improvements! ;)


I am unsure about some ES6 additions.

Thanks to Crockford we got a decent ES5. Remember that several syntax changes got postponed to ES6. And don't forget about "E4X", a beast that was supposed to be JavaScript 2? http://www.ecma-international.org/publications/standards/Ecm... It got about as much traction as XHTML 2. Both had no backwards compatibility - an insane idea. Some new features in ES6 look like Crockford's "good parts" movement lost and the Sun/Oracle Java evangelists won.

Hopefully Douglas Crockford updates his "JavaScript: The Good Parts" in time for JavaScript 6.


I really wish E4X had gotten traction. I wrote a firefox extension using it and it was awesome to do XUL + JS with it.

Years later

  var foo = <foo>{item}</foo>; 
is the new hotness in facebook's JSX.


AFAIUI from the React people E4X had a lot of incidental complexity and extraneous stuff relative to JSX. So there's that.

I'd argue that with ES6, JSX could just reserve the "jsx" prefix for interpolated strings and go with

   jsx`<foo>blah</foo>`
but that's a typical hindsight-is-20/20-thing.


This blog post - "JSX: E4X The Good Parts" - covers what's in and out for JSX:

http://blog.vjeux.com/2013/javascript/jsx-e4x-the-good-parts...


> My biggest issue with the recent additions to the language is that there's now a thousand different ways to do the same thing

To be fair though, this has been an issue with Javascript since its creation (and has been getting worse as the language has been expanded while maintaining backwards-compatibility).

Many other languages have similar problems (Ruby is an offender that comes to mind, though it's perhaps not on the level of ES6+)

I'm not much of a polyglot, but one language which seems to have "one obvious way" as part of its design choices that springs to mind is Python. I have a hunch that pure-functional languages would be less choicy as well, though I have no familiarity with any.


What's interesting is that all of the 'niceness' seen is the result of a switch to the functional style.

Excluding the singleton class (which could have been a single function itself), you've got your map/filters in the gif processing, and encapsulation of async activities[0] through monads via promises.

Seems like JavaScript got good when it started embracing what functional programmers have been drowned out saying for years.

Looking good indeed.

[0] Async actions as a language-level construct, as opposed to a syntactic abstraction have always been A Bad Idea. Promises should have always been the default, and manually writing callbacks should have never been a thing.


Except for immutability. It's still nowhere near the norm.


I'm working on a JS stack that leverages React, es6, functional style programming, and immutable data. Check my profile for more info.


> Async actions as a language-level construct, as opposed to a syntactic abstraction has always been A Bad Idea.

Are you saying that the upcoming async/await features of JS are a bad idea? Or maybe I'm not following; can you give an example?


This:

    images <- get "http://example.com/images.json"
is objectively easier to reason about than:

    var images;
    get("http://example.com/images.json", (err, resp) => {
        if (err) throw err;
        images = resp;
    });
even though the former might be internally implemented as the latter.

In addition, the former doesn't give the programmer the 'opportunity' to cause a race condition, and encapsulates the failure entirely for you (automatically and by default); if `images` ends up erroring, and you use it again, then it'll short-circuit without crashing, very similar to `then()`.


It's mostly easier to read because it no longer deals with error cases. All code becomes easier to read if you ignore errors.

I can make my code arbitrarily short if it doesn't have to be correct.


That's the magic of monads: the errors are all handled for you (in an encapsulated, lossless way), and you can deal with them (if you choose) at the end of the chain, just the same as promises (because promises are a monad).
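A sketch with a hypothetical failing `get` (not a real API, just a stand-in): the intermediate then-steps are skipped and the error surfaces once, at the end of the chain.

```javascript
// Hypothetical get() that fails, standing in for a real HTTP request.
function get(url) {
  return Promise.reject(new Error('network down'));
}

var outcome = get('http://example.com/images.json')
  .then(function (images) { return images.length; }) // never runs
  .catch(function (err) { return 'handled: ' + err.message; });

outcome.then(function (msg) { console.log(msg); }); // "handled: network down"
```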


The same can be said of exceptions.

However, my comment above had nothing to do with how you handle errors, it was about the unsoundness of comparing code that handles errors to code that doesn't.


Right - What I'm saying is that the above former code does handle the error, in the exact same way as the latter code.

For clarification, the error is handled implicitly (but you'd know what kind of error it was due to the type signature), but you can always handle it in manner you choose to at any point.


It's not really the same as exceptions in a typed language because it's forced to be delimited. Even in Javascript you'll have to do a little ceremony to "escape" the golden path driven by the monad, though you'll have many ways to take short-cuts and forget details.


Great screencast – lots of detail in a short period. No messing around; it also shows what an optimized workflow should look like. The audio/video quality of the screencast was extremely good as well.


What will SystemJS+JSPM give me over Webpack? Just curious.

Webpack also supports ES6 modules (as well as CJS and AMD) when used with 6to5 transpiler, and has a lot of great features like splitting code into bundles, figuring out the best way to split dependencies based on module size, and asynchronously loading missing chunks.


Haizz, I've just learned to use Webpack. Now comes SystemJS+JSPM. There are just too many new tools, workflows and different ways of doing things in the JavaScript world.


You don't have to try to learn everything. I know HR throws the kitchen sink into job listings, but it's okay to specialize in just one tool of a particular kind.


No, because SystemJS+JSPM does nothing to invalidate your Webpack knowledge. It's still an active project that works just fine for the needs of many developers.


This is a trite response. If it does something significantly better, then it does do something to invalidate that knowledge in a de facto sense, at least for the caring craftsman.

Not that there's any point complaining about it either though...


It's just not that hard. Try it and see if you like it.


In their current states, Webpack is for production applications, JSPM for development / little ES6 apps, and SystemJS is used for Node.js.

- Webpack supports much more than just ES6 (and more people work on it), so you probably want to use it for production apps.

- JSPM allows you to load ES6 libraries by transpiling them on the fly, which is great for simplicity.

- SystemJS works by itself in a Node environment, you don't need JSPM for it.

Guy Bedford has made a great tool with JSPM.


This seems so fragmented, even before you consider all the other module loaders, transpilers and toolchains.

I'm a senior frontend developer and I find this side of JavaScript truly bewildering.


Agreed on there being too many tools. Thankfully most of them are starting to embrace the same core feature: writing npm modules. Small bits of code that you publish once to npm, and then reuse across many projects and potentially many build/workflow systems.


The main issue is really the lack of module concept in Javascript core. I hope things get better with ES6, because at this point it will be much, much easier to write a library to load them all, without relying on "proprietary" loaders.


It runs in the browser, so there's less boilerplate to start a project, you don't have to run a build daemon for every project or wait for rebuilds to finish, etc.


Aside: "A DJ using Ableton Live, a huge bundle of MaxMSP emitting a UDP stream of beat information (courtesy of the immensely pro Cade), a UDP ➝ WebSockets server, and DJGif pulling hundreds of GIFs off various Tumblrs to beatmatch <x-gif> on two projectors makes for a hell of a good show."

Does anyone know why he wouldn't have used the midi clock from Ableton (or other DJ software) to a "midi->websocket" server?


One reason might be that MIDI clock is just a repeating, identical message. It does not contain any position information (like where the next bar starts). To get positional information relating to beats and bars, you need to use MIDI Song Position Pointers, however these use a 14 bit counter which will typically overflow after about 10 minutes. In short, long-running musical clock sync over MIDI is problematic.

For syncing sound recorders, MIDI Time Code (MTC) is usually used, which works fine for up to 24 hours, but doesn't contain information about bars and beats, so unless the system receiving the MTC has a priori knowledge of what time locations will correspond to bars and beats (which it typically wouldn't have in the context of a live performance), MTC can't be used for this purpose.


Sweet! Thanks for the detailed run down. As a DJ myself, at my residency we have a random splattering of strange movie clips that we just play and they are never beat matched but mentally something "always" hits a beat when you are watching it. I guess he's going for stuff actually changing in musical time.


The beat information might have been more elaborately derived from the audio, as opposed to just sending the basic tempo.


For anyone building projects utilizing GIFs, check out the GIPHY api.

http://api.giphy.com/

Here's another project using GIFs and beat matching:

http://www.seehearparty.com/

Some other cool projects:

http://giphy.com/labs


What advantages does this offer over the more mature ecosystem that surrounds browserify?


From what I can tell, it means you no longer need to write/maintain gulp scripts and you don't have a build step during development


That's assuming you want all your scripts to be loaded on the front end asynchronously, which seems to be only for development. For production I think you'd still want to compile this all down for a faster load time. From the jspm page: "For production, use the jspm CLI tool to download packages locally, lock down versions and build into a bundle." So I think you're correct that there'd be no build step for development, but you'd still have some sort of script being run, whether it be gulp or otherwise, that builds it for production use.


yep, the point though is that having a build step during dev adds some overhead (which can easily go into the several-seconds range in my experience) to the type-save-reload cycle, which is exacerbated by the hit-reload-before-build-finished-so-need-to-reload-again pattern.

My understanding is that this tool removes that entire class of annoyances, and gives a no-hassle live-reload ES6-enabled environment on top.


Substack's essay applies as much to this tool as it does to webpack:

https://gist.github.com/substack/68f8d502be42d5cd4942

Importantly, overloading require like this:

  var collections = require('npm:lodash-node/modern/collections');
  var $ = require('github:components/jquery');
... means that you can't publish this module to npm and have the require statements work as expected.


The build step is mitigated by incremental reloading, and the incremental reloading can be tied to a live-reload event.

So you end up with the same workflow as shown in this video; but most likely faster for many modules (browser requests 1 JS file rather than potentially hundreds) and also more realistic for production (aside from minification, the dev environment is the exact same as the production environment).


I think that's an okay compromise, especially if you typically have a CI process anyways.


You still need to maintain build scripts for production, though.

p.s. For prototyping like in the vid, you can use tools like beefy or wzrd to avoid setting up any browserify build step. :)


Who says you have to use gulp? Just use make.

https://github.com/williamcotton/makeify


Looks like an interesting workflow, but I think I still prefer using a compile-to-js language like CoffeeScript over the use of shims. I know it's a silly thought, but to me, shims sort of violate the separation of responsibilities between the user and the developer. If I, as the developer, want to write more concise, scalable, and convenient code, it should be my responsibility to spend my development resources to convert that code to something the user's browser can understand.


I would love to hear more about djgif


There is a link to the repo in the article which might be helpful.

https://github.com/geelen/djgif


Yes, please. A video, an explanation of how to get beat data from Ableton to the server, anything helps.


Really great screencast. If you have the good fortune to be a JS developer in 2015 (and beyond), there are some exciting times ahead!


I'm really late to the JS party, having just picked it up a month or so ago. node.js and Meteor are my current tools of choice; it's a rather super environment to get stuff done in.

It's nice to see JS get some positive press.


In fact, JavaScript is the one late to the party, having developed some rudimentary tools only recently. There's nothing exciting about redoing old mistakes in a new ecosystem.


Don't say things like that on HN. Say "Wow, amazing! I did not know this is possible in JS! JS is the future, other languages are not needed!"


Congratulations, you've just unlocked the "Top Negative Statement in a Thread" achievement!


I'd be curious to see how a medium-to-large project (500+ files) works with this approach. I'm guessing it would take a while to load the page, since the browser has to request each script?

The tool looks pretty good. I think the most important feature is working with existing modules on npm. :) This means that most browserifiable modules can be jspm'd and vice versa.


In production you would bundle it all into one file, while in development most/all of your files will be cached by the browser. But to be honest I've never worked with 500+ files in the browser.


Nothing to do with this, but I remember a conf talk about javascript in 2040++ or something; can't find the video.



Gary Bernhardt is the speaker, though my sense is that the talk is more of a thought-experiment/parable than anything else. He hasn't quite said that; it's just my impression.


Wow, this looks great. Nice one Glen. I'm looking forward to trying this out. I'm going to, ASAP.


I found this link that was linked by the article to be very interesting: http://blog.npmjs.org/post/101775448305/npm-and-front-end-pa...


Slightly off topic, and sorry if it seems obvious, but his coding workflow looks really neat. He must be using a Chrome extension to live-reload the changes? Did anyone recognize the editor?



The editor was webstorm with a very minimal UI


He talks about using live-server[1] about ~2m in. It injects the live reload script into the page it serves so no plugin needed.

[1] https://github.com/tapio/live-server


I wonder if there is a way to split the build into several smaller files that can be loaded at runtime, instead of just one big file.


Use WebPack and set up code splitting
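For reference, a minimal webpack 1-style sketch (file names and entry points are hypothetical): two entry points plus CommonsChunkPlugin to pull shared code into its own file.

```javascript
// webpack.config.js -- a sketch; paths and entry names are hypothetical
var webpack = require('webpack');

module.exports = {
  entry: {
    app: './src/app.js',
    admin: './src/admin.js'
  },
  output: {
    path: __dirname + '/dist',
    filename: '[name].bundle.js'
  },
  plugins: [
    // Dependencies shared by both entries end up in common.js, loaded once.
    new webpack.optimize.CommonsChunkPlugin('common.js')
  ]
};
```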


Finally we have promises support built into JS.


What is the difference between this and require.js for loading modules? I feel like I'm missing something


require.js loads modules that are written to the CommonJS and AMD standards. systemjs loads those too, but it also loads a few other things, including modules that just dump things into global scope and modules written to the ES6 standard.

It's sort of like require.js on steroids.


gotcha, thanks!


[deleted]


The article and cast was specifically about making the development flow for frontend javascript more robust. What are you complaining about? This article has nothing to do with advocating JS for "critical systems, frameworks and back end".


I hope you're not serious.


Still has more WTF moments than any other language I have used so far.


Your comment might be useful or interesting if you explained which WTF moments affected you in particular, or even just which other languages you've used. As it is, it's not adding much to the conversation which is why you've been downvoted.


Some WTF moments in Javascript, courtesy of Gary Bernhardt:

    var foo = ["10", "10", "10"];
    foo.map(parseInt);
    // Returns [ 10, NaN, 2 ]

    [] + [] // ""
    [] + {} // "[object Object]"
    {} + [] // 0
    {} + {} // NaN

    var a = {};
    a[[]] = 2;
    alert(a[""]); // alerts 2

    alert(Array(16).join("wat" - 1) + " Batman!");
Press F12 and use the Console to verify these if you're skeptical.
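That `a[[]]` one is less mysterious than it looks: object keys are strings (pre-Symbol), so the array key is coerced with String():

```javascript
// Non-string keys are coerced with String(): [] becomes "".
var a = {};
a[[]] = 2;
console.log(a[""]);    // 2

a[[1, 2]] = 3;         // String([1, 2]) is "1,2"
console.log(a["1,2"]); // 3
```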


While these are WTF moments, when is anyone actually going to run [] + [] in a real project? The map to parseInt is the only one that's even close to something you'd actually write.


Writing literal [] + []? No.

Writing x + y where x and y are both arguments to a function, and some call site was passed an (empty?) array of integers instead of an integer? Believable.


So the real problem is passing wrong parameters to a function? Sounds like you should swap to something like TypeScript for strict typing then.

Or you know, stop acting like JavaScript is unique in that improper function calling breaks your code.

I can't believe in 2015 there are still people who follow the "JavaScript equalities are WTF" mentality. If you are running into equality operator problems in JS, you are probably going to run into a myriad of problems in any language.


Except in any decent language, however typed, improper function calling creates errors.

I can't believe in 2015 there are still people who follow the "invalid input should produce invalid output" mentality.


I've been writing JS professionally for, what, a decade now - and I've never once had this issue.


It's not about this causing issues IMO. It's just.. why? Why would '+' not be commutative? Why not throw an error? What is the use-case for "adding" '[]' and '{}'?

Or, put another way: why would a programmer want to have this "feature" instead of being notified: "hey, you're adding '[]' and '{}', that doesn't make any sense, fix that!"

Sure, you can work with a language like that. But it sure doesn't feel like somebody thought all of this through.


The first one is easy to understand though.

parseInt takes two arguments: $thing_to_change and $radix; map iterates over an array and feeds the callback $value and $index. So you're getting parseInt("10", 0), parseInt("10", 1) and parseInt("10", 2).

The fix would be to partially apply parseInt with your defined radix:

  var foo = ["10", "10", "10"];
  var base10 = function(val){
      return parseInt(val, 10);
  };
  var x = foo.map(base10);
  // x is [10, 10, 10]


A lot of the ones he presented are easy to understand. It's still a WTF when you run into it though.


All you have done is used a function without understanding what it was doing, or reading the documentation. Most JS developers know how parseInt works, and even if they run into this problem, would quickly discover the cause. I don't see how this is a flaw of Javascript; it could happen to a developer of any language, if their strategy is 'well, it looks like it'll work'.


> All you have done is used a function without understanding what it was doing, or reading the documentation

These aren't my examples. I haven't done anything. I credited the person who provided them: Gary Bernhardt.

https://www.destroyallsoftware.com/talks/wat

https://www.destroyallsoftware.com/talks/the-birth-and-death...

Next time before you make an accusation, reread the post before pressing the reply button.


Nope. Behavior should be pretty straight-forward, I shouldn't have to load up the docs to convert something to an integer.


I guess my question is: what would have to change, in the last example, to make it not WTF to you? To me it seems pretty straight-forward what is happening.


These are pretty bad examples. I can safely say that I have never run into these in the wild, or seen any other developer run into these.


I know what the batman example gives but I don't get how it's a WTF?

I assume the expected output is "wa" but why should a string less an integer produce that?


There is no expected output. It doesn't make sense to subtract the value 1 from the string "wat". But Javascript will cast both of them to numbers, then try to subtract 1 from NaN. However, if you do "wat" + 1, Javascript will cast both of them to strings, and append "1" to "wat".

It's not just the odd behavior, but the inconsistency.
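The asymmetry comes from the operators, not the values: `-` always coerces both operands to Number, while `+` prefers concatenation when either side is a string. A quick check (with a shorter Array(4) for brevity):

```javascript
console.log("wat" - 1);  // NaN     (Number("wat") is NaN)
console.log("wat" + 1);  // "wat1"  (1 is coerced to "1")

// join() coerces its separator to a string, so NaN becomes "NaN":
console.log(Array(4).join("wat" - 1) + " Batman!"); // "NaNNaNNaN Batman!"
```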


Ok that makes a bit more sense. I didn't even think about how it would be if you tried to add them.

I like javascript but I don't do anything so complicated (or maybe not the right types of things) that I run into many of these situations.


No, it produces NaN (Not-a-Number), and then repeats it in a string 16 times. The WTF is that NaN can be concatenated to strings as "NaN".


NaN is passed to Array.prototype.join() as the separator to join strings with, so it gets coerced to String ("NaN"). It's deliberately working backward from the behaviour of Array.prototype.join() to create a contrived example for giggles, not a WTF.

It's no more a WTF than this, which follows the same principle: https://gist.github.com/insin/1183916


Oh, I understand that. I was more trying to explain why someone might think it was a WTF.

And I'm pretty sure your example returns the decimal 15. Comma operator returns the last element in the list, and parseInt will truncate strings with non-number-like text to the number-like part. Here, the number-like part is a hexadecimal code, triggering another feature of parseInt that it can figure out the radix on the fly.


Most of these WTF examples basically boil down to JS doing type coercion willy-nilly. This 'feature' makes writing conditionals slightly shorter, in exchange for introducing the possibility of massive bugs everywhere in your code at any moment. Seriously, f* JS type coercion.

The other misfeature I hate is that accessing undefined properties doesn't raise an error then and there, but you can be sure it will make your program blow up a bit later.

Typescript helps to solve both.
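For example, the `==` coercion rules aren't even transitive:

```javascript
console.log(0 == "");   // true  ("" coerces to the number 0)
console.log(0 == "0");  // true  ("0" coerces to the number 0)
console.log("" == "0"); // false (two strings, compared as strings)
```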



I'm actually trying to write a book about that myself. It's called "JavaScript es basura caliente: Learning JavaScript in Anger".

The goal of the book is not to rag on JS. That's not new or really interesting in its own right. The goal is to walk people through the oddities of the language such as identity loss when passing a function from an object to something else (the good old this == window rather than self).

There are interesting, and annoying, things about JS that are non-obvious to people coming from a different language like Java or C#. There are other issues like testing and package management that are either assumed or glossed over. The JS community knows about them. Unfortunately for most of us, the responses to the language's weaknesses and the ecosystem's strengths are dispersed throughout the interblogs.

In fact, from personal use of JS and some research for the book, I'm moving to a functional approach with JS. OO in Ecma5 is a pain. Pure (or Clojure-like) functional can work with a bit of help from Underscore. That paradigm seems to fit the mentality of JS better too.

https://leanpub.com/javascriptesbasuracaliente


Here you go: http://wtfjs.com



You didn't link to it and it's still pretty irrelevant to the original discussion.


Instead of creating yet-another-library, yet-another-framework or a superset of it (TypeScript, etc.), we should try to fix JavaScript itself. Why companies don't push for this even if it's in all their best interests is beyond me.


They are. That's what ES6, a major focus of TFA, is.


They are not fixing JS, they are extending it. That's a different thing. Because backwards-compatibility.


Node.js is the only real dev language.


Node.js is not even a language. Talk about factual correctness.


Node.js is just a Reactor Pattern implementation with some package management on top.


JavaScript is broken by design (e.g. adding object properties on the fly kills perf and toolability); we need a replacement.

Why can't the web get a nice language like C# or Swift?

In the meantime I will stick with Dart



