God, he makes it look good. The demo does give me a few of the ol' "too much magic" heebie-jeebies, but frontend JS development is so badly lacking in compelling package management stories that I'm going to give jspm a shot.
Here's to 2015, I suppose.
P.S. Glen, if you're reading this, I really would love to see a youtube of a DJ hooked up to automatically synced GIFs. Don't be a tease.
npm is not just a server side package manager. It handles package management for the front-end very well and has for a long time, learn more here -- http://browserify.org/.
I'm a bit torn over using npm this way. On one hand, being able to use tools already built in Node land, such as the EventEmitter, is nice. On the other hand, having to use Browserify and navigate the mess of node_modules is a PITA. If npm handled the dependency structure of Node modules better, I would likely be all for it, but it's a major negative for me, especially when a quick project starts off as a mix of a Node server and a moderate amount of frontend-heavy logic.
I happily use Webpack and NPM and have no problems with it.
NPM is in fact fine for frontend package management, it's not just about being able to use "Node land tools". All my frontend deps are on NPM, I published some NPM packages myself, and I don't even use Node.
Yes there are small issues (like wanting to be able to specify non-dupe requirement for some libraries) but overall NPM works fine for frontend. Much better than Bower if your app is moderately complex.
Agreed. I never saw the point of Bower, since it requires Node itself anyway. With tools like Browserify (and, I assume, Webpack... haven't had a chance to use it yet), you can load dependencies into your project extremely easily: `npm install backbone --save-dev`, then `var Backbone = require('backbone');` in your code.
I'm not really familiar with jspm or Bower. In the limited 60-second window I spent researching their sites, jspm doesn't seem to have as much documentation as Browserify. If jspm comes up more often alongside npm, I'll think about it.
If you have new projects you can try this out on without too much risk, then please do give this a go and report/blog about it!
(I, too, am extremely dissatisfied with the current state of ES5-based JS modularity. Frankly it's incredible that anyone can get anything done at all given the state of tooling for modularity/building/deployment ATM.)
Bower is great if you need to install jQuery or one of its plugins, or another very popular JS framework.
When you get into smaller modules, persistence, recursive or deep dependency trees, versioning, automated bundling (i.e. no manual "config" files), etc. then bower is not a strong choice.
My biggest issue with the recent additions to the language is that there's now a thousand different ways to do the same thing.
Iteration:
for (var i = 0; i < y.length; i++) { ... }
for (var x in y) { ... }
for (var x of y) { ... }
y.forEach(function(x, i) { ... })
Object.keys(y).forEach(function(x) { ... })
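And these aren't interchangeable, which is part of the problem. A quick runnable sketch of how the two `for` loop headers differ on the same array:

```javascript
var y = ['a', 'b'];
var inKeys = [], ofVals = [];

for (var k in y) { inKeys.push(k); }  // enumerates indices as strings: "0", "1"
for (var v of y) { ofVals.push(v); }  // iterates the values (ES6): "a", "b"
```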
Comparison:
==
===
Object.is() (would have been a good laugh if introduced as ==== instead)
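The three comparisons really do disagree; here are the edge cases, runnable:

```javascript
var loose  = (1 == '1');            // true  -- == coerces types
var strict = (1 === '1');           // false -- === does not
var nanEq  = (NaN === NaN);         // false -- the classic === oddity
var nanIs  = Object.is(NaN, NaN);   // true  -- Object.is fixes NaN...
var zeroIs = Object.is(-0, 0);      // false -- ...but distinguishes -0
```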
Of course, this doesn't matter much if you're a single developer. I've started writing a bit of ES6/ES7 and it's pretty cool. But it's going to be a PITA for projects built by many developers of varying experience levels. The nice things about smaller languages is that there's often only one way to do something, so when you write code or review other people's code, your mind is free from the minutiae and you can focus on the big picture instead.
It's a bit funny that just when JS is, by general consensus, finally getting "better", I'm actually considering more and more switching to a small but well-built compile-to-JS language. I guess smallness and simplicity just matter a lot to me.
I would think that most languages suffer from this at least as much as JavaScript. The solution is to have guidelines and enforce them through code reviews. Linters can also catch some of the rules.
I'd say that JavaScript's benefit is that it's so simple that there are not too many solutions to do the same thing, unlike massive enterprise languages like C# and Java.
for (var i = 0; i < y.length; i++) { ... }
for (var x in y) { ... }
for (var x of y) { ... }
y.forEach(function(x, i) { ... })
Object.keys(y).forEach(function(x) { ... })
None of the "for" variations are considered good practice in ES6. You should be using "let" (or "const" if it's allowed here) to avoid var-hoisting of "i".
Personally, I'd advocate using "for" if you have a need for early return/break/continue -- otherwise I'd go for the first forEach() variant. Or, even better, use "map" and avoid side effects in the function you're mapping over the collection. Unless of course you're doing something imperative.
The fact that the last forEach() variant is possible is a good thing, though I wouldn't recommend its use in this case because it's needlessly complex -- it shows that the language/stdlib is becoming more compositional.
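A sketch of that preference: forEach for imperative side effects, map when you just want a transformed array:

```javascript
var urls = ['a.gif', 'b.gif'];

// Imperative: one side effect per element, nothing returned.
urls.forEach(function (u, i) { console.log(i, u); });

// Compositional: build a new array, leave the input untouched.
var tags = urls.map(function (u) { return '<img src="' + u + '">'; });
```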
Yes, "let" is better than "var". I could also have used a fat arrow in the forEach(). But my point was to list iteration variations, so outside of that I wrote traditional ES5.
This illustrates the issue though. "var" is like "let" but without block scoping, so you should almost never use "var", but it's still there to trip newcomers. The fat arrow is like the "function" keyword and most of the time you can use them interchangeably, but if you rely on "this" they're not interchangeable anymore.
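Both traps are easy to demonstrate (the object and property names here are illustrative):

```javascript
// 1. var is function-scoped, so the loop counter leaks out of the block.
for (var i = 0; i < 3; i++) {}
var leaked = i; // 3 -- with let, this line would be a ReferenceError

// 2. Arrows capture the enclosing `this`; plain functions get their own.
var widget = {
  label: 'spinner',
  getArrow: function () { return () => this.label; },
  getPlain: function () { return function () { return this && this.label; }; }
};
var a = widget.getArrow();
var p = widget.getPlain();
a(); // 'spinner' -- the arrow kept the widget as `this`
p(); // undefined -- the detached plain function lost it
```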
This growing laundry list isn't exactly thrilling. I'm glad to have map(), filter(), every() and friends, though.
Thanks to Crockford we got a decent ES5. Remember that several syntax changes got postponed to ES6. And don't forget about E4X, a beast that was supposed to be JavaScript 2: http://www.ecma-international.org/publications/standards/Ecm... It got about as much traction as XHTML 2. Both had no backwards compatibility -- an insane idea. Some new features in ES6 look like the Crockford "good parts" movement lost and the Sun/Oracle Java evangelists won.
Hopefully Douglas Crockford updates his "JavaScript: The Good Parts" in time for JavaScript 6.
> My biggest issue with the recent additions to the language is that there's now a thousand different ways to do the same thing
To be fair though, this has been an issue with Javascript since its creation (and has been getting worse as the language has been expanded while maintaining backwards-compatibility).
Many other languages have similar problems (Ruby is an offender that comes to mind, though it's perhaps not on the level of ES6+)
I'm not much of a polyglot, but one language that springs to mind as having "one obvious way" as part of its design is Python. I have a hunch that pure-functional languages would offer fewer choices as well, though I have no familiarity with any.
What's interesting is that all of the 'niceness' seen is the result of a switch to the functional style.
Excluding the singleton class (which could have been a single function itself), you've got your map/filters in the gif processing, and encapsulation of async activities[0] through monads via promises.
Seems like JavaScript got good when it started embracing what functional programmers have been drowned out saying for years.
Looking good indeed.
[0] Async actions as a language-level construct, as opposed to a syntactic abstraction have always been A Bad Idea. Promises should have always been the default, and manually writing callbacks should have never been a thing.
var images;
get("http://example.com/images.json", (err, resp) => {
  if (err) throw err;
  images = resp;
});
even though the former might be internally implemented as the latter.
In addition, the former doesn't give the programmer the 'opportunity' to cause a race condition, and encapsulates the failure entirely for you (automatically and by default). If `images` ends up erroring and you use it again, it'll short-circuit without crashing, very similar to `then()`.
That's the magic of monads: the errors are all handled for you (in an encapsulated, lossless way), and you can deal with them (if you choose) at the end of the chain, just the same as promises (because promises are a monad).
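That short-circuiting is easy to see with a plain promise chain (getImages here is a stand-in for any async call):

```javascript
// A rejection anywhere skips every remaining then() step and lands in
// catch(), the same way an error-carrying monadic value short-circuits.
function getImages() {
  return Promise.reject(new Error('network down'));
}

getImages()
  .then(function (images) { return images.length; }) // skipped entirely
  .then(function (n) { console.log(n); })            // skipped too
  .catch(function (err) { console.log('handled:', err.message); });
```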
However, my comment above had nothing to do with how you handle errors, it was about the unsoundness of comparing code that handles errors to code that doesn't.
Right - What I'm saying is that the above former code does handle the error, in the exact same way as the latter code.
For clarification, the error is handled implicitly (but you'd know what kind of error it was due to the type signature), though you can always handle it in whatever manner you choose at any point.
It's not really the same as exceptions in a typed language because it's forced to be delimited. Even in Javascript you'll have to do a little ceremony to "escape" the golden path driven by the monad, though you'll have many ways to take short-cuts and forget details.
Great screencast – lots of detail in a short period. No messing around, and it also shows what an optimized workflow should look like. The audio/video quality of the screencast was extremely good as well.
What will SystemJS+JSPM give me over Webpack? Just curious.
Webpack also supports ES6 modules (as well as CJS and AMD) when used with 6to5 transpiler, and has a lot of great features like splitting code into bundles, figuring out the best way to split dependencies based on module size, and asynchronously loading missing chunks.
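For reference, the 2015-era setup being described looks roughly like this. A minimal sketch only; file names are hypothetical, and the 6to5 loader name is from that period (it was later renamed Babel):

```javascript
// webpack.config.js -- minimal sketch
module.exports = {
  entry: './src/app.js',
  output: { path: './dist', filename: 'bundle.js' },
  module: {
    loaders: [
      // Transpile ES6 modules down to something Webpack can bundle.
      { test: /\.js$/, exclude: /node_modules/, loader: '6to5-loader' }
    ]
  }
};

// Code splitting: require.ensure(['./editor'], function (require) { ... })
// makes Webpack emit a separate chunk that is fetched on demand.
```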
Haizz, I've just learned to use Webpack, and now comes SystemJS+JSPM. There are just too many new tools, workflows and different ways of doing things in the JavaScript world.
You don't have to try to learn everything. I know HR throws the kitchen sink into job listings, but it's okay to specialize in just one tool of a particular kind.
No, because SystemJS+JSPM does nothing to invalidate your Webpack knowledge. It's still an active project that works just fine for the needs of many developers.
This is a trite response. If it does something significantly better, then it does do something to invalidate that knowledge in a de facto sense, at least for the caring craftsman.
Not that there's any point complaining about it either though...
Agreed on there being too many tools. Thankfully most of them are starting to embrace the same core feature: writing npm modules. Small bits of code that you publish once to npm, and then reuse across many projects and potentially many build/workflow systems.
The main issue is really the lack of module concept in Javascript core. I hope things get better with ES6, because at this point it will be much, much easier to write a library to load them all, without relying on "proprietary" loaders.
Aside: "A DJ using Ableton Live, a huge bundle of MaxMSP emitting a UDP stream of beat information (courtesy of the immensely pro Cade), a UDP ➝ WebSockets server, and DJGif pulling hundreds of GIFs off various Tumblrs to beatmatch <x-gif> on two projectors makes for a hell of a good show."
Does anyone know why he wouldn't have used the midi clock from Ableton (or other DJ software) to a "midi->websocket" server?
One reason might be that MIDI clock is just a repeating, identical message. It does not contain any position information (like where the next bar starts). To get positional information relating to beats and bars, you need to use MIDI Song Position Pointers, however these use a 14 bit counter which will typically overflow after about 10 minutes. In short, long-running musical clock sync over MIDI is problematic.
For syncing sound recorders, MIDI Time Code (MTC) is usually used, which works fine for up to 24 hours, but doesn't contain information about bars and beats, so unless the system receiving the MTC has a priori knowledge of which time locations will correspond to bars and beats (which it typically wouldn't have in the context of a live performance), MTC can't be used for this purpose.
Sweet! Thanks for the detailed run down. As a DJ myself, at my residency we have a random splattering of strange movie clips that we just play and they are never beat matched but mentally something "always" hits a beat when you are watching it. I guess he's going for stuff actually changing in musical time.
That's assuming you want all your scripts to be loaded on the front end asynchronously which seems to be only for development. For production I think you'd still want to compile this all down for a faster load time. From the jspm page: "For production, use the jspm CLI tool to download packages locally, lock down versions and build into a bundle." So I think you're correct that there'd be no build step for development but you'd still have some sort of script being run, whether it be gulp or otherwise, that builds it for production use.
yep, the point though is that having a build step during dev adds some overhead (which can easily go into the several seconds range in my experience) into the type-save-reload cycle, which is exacerbated by hit-reload-before-build-finished-so-need-to-reload-again pattern.
My understanding is that this tool removes that entire class of annoyances, and gives a no-hassle live-reload ES6-enabled environment on top.
The build step is mitigated by incremental reloading, and the incremental reloading can be tied to a live-reload event.
So you end up with the same workflow as shown in this video; but most likely faster for many modules (browser requests 1 JS file rather than potentially hundreds) and also more realistic for production (aside from minification, the dev environment is the exact same as the production environment).
Looks like an interesting workflow, but I think I still prefer using a compile-to-js language like CoffeeScript over the use of shims. I know it's a silly thought, but to me, shims sort of violate the separation of responsibilities between the user and the developer. If I, as the developer, want to write more concise, scalable, and convenient code, it should be my responsibility to spend my development resources to convert that code to something the user's browser can understand.
I'm really late to the JS party, having just picked it up a month or so ago. node.js and Meteor are my current tools of choice; it's a rather super environment to get stuff done in.
In fact, JavaScript is the one late to the party, having developed some rudimentary tools only recently. There's nothing exciting about redoing old mistakes in a new ecosystem.
I'd be curious to see how a medium to large project (500+ files) works with this approach? I'm guessing it would take a while to load the page since the browser has to request each script?
The tool looks pretty good. I think the most important feature is working with existing modules on npm. :) This means that most browserifiable modules can be jspm'd and vice versa.
In production you would bundle all into one file, while in development most/all of your files will be cached by the browser. But to be honest I've never worked with 500+ in the browser.
Gary Bernhardt is the speaker, though my sense is that the talk is more of a thought-experiment/parable than anything else. He hasn't quite said that, it's just my impression.
Slightly off topic, and sorry if it seems obvious, but his coding workflow looks really neat. He must be using a Chrome extension to live-reload the changes? Did anyone recognize the editor?
require.js loads modules that are written to the CommonJS and AMD standards. systemjs loads those too, but it also loads a few other things including modules that just dump things in to global scope and things written to the ES6 standard.
The article and cast were specifically about making the development flow for frontend JavaScript more robust. What are you complaining about? This article has nothing to do with advocating JS for "critical systems, frameworks and back end".
Your comment might be useful or interesting if you explained which WTF moments affected you in particular, or even just which other languages you've used. As it is, it's not adding much to the conversation which is why you've been downvoted.
While these are WTF moments, when is anyone actually going to run [] + [] in a real project? The map to parseInt is the only one that's even close to something you'd actually write.
Writing x + y where x and y are both arguments to a function, and some call site was passed an (empty?) array of integers instead of an integer? Believable.
So the real problem is passing wrong parameters to a function? Sounds like you should swap to something like TypeScript for strict typing then.
Or you know, stop acting like JavaScript is unique in that improper function calling breaks your code.
I can't believe in 2015 there are still people who follow the "JavaScript equalities are WTF" mentality. If you are running into equality operator problems in JS, you are probably going to run into a myriad of problems in any language.
It's not about this causing issues IMO. It's just.. why? Why would '+' not be commutative? Why not throw an error? What is the use-case for "adding" '[]' and '{}'?
Or, put another way: why would a programmer want to have this "feature" instead of being notified: "hey, you're adding '[]' and '{}', that doesn't make any sense, fix that!"
Sure you can work with a language like that. But it sure doesn't feel like somebody thought all of this this through.
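For the curious, here's exactly what the coercion does in those cases (runnable):

```javascript
var a = [] + [];    // "" -- both arrays coerce to empty strings
var b = [] + {};    // "[object Object]" -- the object's default toString
var c = ({} + []);  // also "[object Object]" when forced to be an expression

// The famous asymmetry: at the start of a statement, {} parses as an empty
// block instead, leaving the unary +[], which is 0 -- same characters,
// different result.
```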
parseInt takes two arguments: $thing_to_change and $radix; map iterates over an array and feeds your callback $value and $index. So you're getting parseInt("10", 0), parseInt("10", 1) and parseInt("10", 2).
The fix would be to partially apply parseInt with your desired radix:
var foo = ["10", "10", "10"];
var base10 = function (val) {
  return parseInt(val, 10);
};
var x = foo.map(base10); // [10, 10, 10]
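For contrast, the unfixed version shows the surprise — map hands parseInt the element index as a radix:

```javascript
var result = ["10", "10", "10"].map(parseInt);
// parseInt("10", 0) -> 10  (radix 0 means "infer", here base 10)
// parseInt("10", 1) -> NaN (base 1 is invalid)
// parseInt("10", 2) -> 2   (binary)
```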
All you have done is used a function without understanding what it was doing, or reading the documentation. Most JS developers know how parseInt works, and even if they run into this problem, would quickly discover the cause. I don't see how this is a flaw of Javascript; it could happen to a developer of any language, if their strategy is 'well, it looks like it'll work'.
I guess my question is: what would have to change, in the last example, to make it not WTF to you? To me it seems pretty straight-forward what is happening.
There is no expected output. It doesn't make sense to subtract the value 1 from the string "wat". But Javascript will cast both of them to numbers, then try to subtract 1 from NaN. However, if you do "wat" + 1, Javascript will cast both of them to strings, and append "1" to "wat".
It's not just the odd behavior, but the inconsistency.
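The inconsistency in one runnable pair:

```javascript
var minus = "wat" - 1; // `-` only works on numbers: "wat" -> NaN, NaN - 1 -> NaN
var plus  = "wat" + 1; // `+` prefers strings: 1 -> "1", result is "wat1"
```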
NaN is passed to Array.prototype.join() as the separator to join strings with, so it gets coerced to String ("NaN"). It's deliberately working backward from the behaviour of Array.prototype.join() to create a contrived example for giggles, not a WTF.
Oh, I understand that. I was more trying to explain why someone might think it was a WTF.
And I'm pretty sure your example returns the decimal 15. Comma operator returns the last element in the list, and parseInt will truncate strings with non-number-like text to the number-like part. Here, the number-like part is a hexadecimal code, triggering another feature of parseInt that it can figure out the radix on the fly.
Most of these WTF examples basically boil down to JS doing type coercion willy-nilly. This 'feature' makes writing conditionals slightly shorter, in exchange for introducing the possibility of massive bugs everywhere in your code at any moment. Seriously, f* JS type coercion.
The other misfeature I hate is that accessing undefined properties doesn't raise an error then and there -- but you can be sure it will make your program blow up a bit later.
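A sketch of that deferred blow-up (object and property names are illustrative):

```javascript
var config = { host: 'example.com' };

var port = config.port;  // typo'd or missing property: silently undefined

// ...much later, far from the original mistake:
var failed = false;
try {
  port.toString();       // the TypeError only surfaces here
} catch (e) {
  failed = e instanceof TypeError;
}
```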
I'm actually trying to write a book about that myself. It's called "JavaScript es basura caliente: Learning JavaScript in Anger".
The goal of the book is not to rag on JS. That's not new or really interesting in its own right. The goal is to walk people through the oddities of the language such as identity loss when passing a function from an object to something else (the good old this == window rather than self).
There are interesting, and annoying, things about JS that are non-obvious to people from a different language like Java or C#. There are other issues like testing and package management that are either assumed or glossed over. The JS community knows about them. Unfortunately for most of us, the responses to the language's weakness and ecosystem strengths are dispersed through the interblogs.
In fact, from personal use of JS and some research for the book, I'm moving to a functional approach with JS. OO in ES5 is a pain. Pure (or Clojure-like) functional style can work with a bit of help from Underscore. That paradigm seems to fit the mentality of JS better too.
Instead of creating yet another library, yet another framework or a superset of it (TypeScript, etc.), we should try to fix JavaScript itself. Why companies don't push for this, even though it's in all their best interests, is beyond me.