Hm, at first glance I am as well, especially with tagged template strings. Those look particularly simple but powerful.
Generally though, is it reasonable for features such as these to be included at the language level? I primarily use ES5 JavaScript, so I'm used to pulling in a decent number of modules (in this case Handlebars, Underscore, etc.) for things like templating.
But I will be relieved to have a language-standard solution to what feels like such a common problem in the environments where JS is primarily used, despite the accessibility tradeoff that comes with feature bloat.
Unsafe string-munging is too easy and seductive. Doing things the safe/secure way needs to be at least as easy.
And it makes for a growable language, the way Lisp macros do. E.g. https://github.com/erights/quasiParserGenerator
This can reduce the demand for feature bloat in future language standards.
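For anyone who hasn't played with tags yet, here's a minimal sketch of the kind of safe interpolation being described (the safeHtml tag and escapeHtml helper are made up for illustration):

// Hypothetical tag that HTML-escapes every interpolated value.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

function safeHtml(strings, ...values) {
  // strings has one more element than values; weave them back together.
  return strings.reduce((out, str, i) => out + escapeHtml(values[i - 1]) + str);
}

const userInput = '<script>alert(1)</script>';
console.log(safeHtml`<p>${userInput}</p>`);
// "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"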
I too was reluctant about using many of the new ES6 features, but as I've begun to adopt them I've come to appreciate them a lot! In fact, when using Babel to compile ES2015/2016 you can in many cases rid yourself of things like Underscore/lodash[1], Promise libraries, etc.
I've recently started a new project in React/Flux, and I've cut away 3 or 4 dependencies compared to previous projects by fully embracing ES2015/2016, as Babel automatically provides the necessary polyfills and compilation.
As for template strings, I think they make sense. Most programming languages have them in some way or another, like Ruby "hello #{world}" or Python 'Hello {world}'.format(world='World').
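For comparison, the JavaScript version:

const world = 'World';
console.log(`Hello ${world}`); // "Hello World"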
lodash has ~200 modular methods. That means it covers much more ground than the handful of ES5/6 built-ins provided, and because it's modular you can use what you need without the stuff you don't.
Most modern languages already have this feature; JavaScript is the strange one for not having a quick way of putting variables into a string, and it's the first that has had to give the feature a name because it was missing for so long. Judging by your comment, it has evidently chosen a confusing one. This isn't meant to replace templating engines.
Admittedly you could trivially extend the string prototype to have something more akin to string.format or printf, which was something you couldn't do in other languages for a long time, making it less of a problem.
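For instance, a quick (purely illustrative) prototype extension along those lines:

// Illustrative only: a tiny named-placeholder format, not a standard API,
// and extending built-in prototypes is generally discouraged.
if (!String.prototype.format) {
  String.prototype.format = function (values) {
    return this.replace(/\{(\w+)\}/g, (match, key) =>
      key in values ? values[key] : match);
  };
}

console.log('Hello {name}, you have {count} messages'.format({ name: 'Ada', count: 3 }));
// "Hello Ada, you have 3 messages"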
I don't understand why everyone considers classes controversial. If you look at code in the wild, it already does what classes desugar to: everyone already writes constructor functions and attaches properties to their prototypes.
At least now with classes, users coming from other languages won't be as tempted to make their own completely incompatible object systems.
In ES6 it's finally possible to properly subclass internal types, it properly separates class properties and prototypes, and it allows users to extend arbitrary classes.
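Roughly speaking, the two forms below are equivalent (a simplified picture; the real desugaring also covers details like enumerability and new.target):

// What plenty of ES5 code already writes:
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.toString = function () {
  return '(' + this.x + ', ' + this.y + ')';
};

// The same shape with ES6 class syntax:
class PointClass {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  toString() {
    return `(${this.x}, ${this.y})`;
  }
}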
While my example isn't an issue in ES6, most of the controversy came from CoffeeScript. Using classes comes with classical-inheritance expectations. In CoffeeScript, changing class variables would carry through to all instances, which is unexpected from a classical-inheritance point of view. However, it made complete sense in the compiled JS.
My favourite feature is promises. While they don't add new syntax and you could probably use a library instead, the fact that they're standardised makes a world of difference. Now that they're standardised:
1. It will become the common interface for deferred operations and library authors can make assumptions that it's there.
2. ES7 Async/Await will be able to leverage this common interface.
My problem with promises is that delivering it without async/await is just awkward. Because, yes, in the ES7 async/await world, you're going to want to use promises heavily because they'll work so great with that.
But in the ES6 world, making your API promise-based makes it heavier-feeling and more awkward than simple, clean callbacks. (Which get an unnecessarily bad rap from people who don't work primarily in JS; there are cases when callbacks get awkward, but just looking at code that's four levels of indent deep and declaring it "callback hell" -- as you so frequently see -- is a super-shallow analysis. That code is clean and easy to understand, as often as not.)
For the near term, you have to use something like babel anyway. Beyond that, generators and promises need to be implemented ahead of async/await as they are prerequisites for it.
Even better: The arrow function example used the statement form of an arrow function, instead of the expression form, which allows you to omit the return:
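For example (a reconstructed illustration, not the article's exact snippet):

// Statement form: explicit body and return.
const doubled = [1, 2, 3].map((n) => { return n * 2; });

// Expression form: the return is implicit.
const alsoDoubled = [1, 2, 3].map(n => n * 2);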
I'll concede that the arrow function is a bit overly complicated, with its `this` behaviour, the sometimes-optional argument list parentheses, and the object literal/function body syntax ambiguity.
I was just talking to my friend who works at Netflix on the frontend team about ES6; he is most excited about destructuring `let { name, age, gender } = user;`. I, however, advocate that the new class syntax is the best part of ES6. Take the following trivial OOP example, which I think reads much more easily, a lot like PHP.
"use strict";
class Vehicle {
constructor(name) {
this.kind = 'Vehicle';
this.name = name;
}
printName() {
console.log(this.name);
}
}
class Car extends Vehicle {
constructor(name) {
super(name); // call the parent constructor with super
this.kind = 'Car';
}
}
let myCar = new Car('Mercedes');
console.log(myCar); // { kind: 'Car', name: 'Mercedes' }
I think it's fair to say that people who want to write object-oriented Javascript are very excited by class. Those of us who have found that the non-OO nature of Javascript is part of what makes it so productive are... much less excited.
JavaScript is object oriented just like any other language. The difference is that it is prototype-based OO instead of class-based OO.
I would say that JavaScript's OO is actually more powerful/flexible than other languages', but it is less readable and more error prone if you're not careful. For example, the fact that objects can 'borrow' a method from another object and apply it to themselves is a common source of issues for beginners.
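For example, the classic slice borrowing, plus the detached-method gotcha that catches beginners:

// Borrowing Array.prototype.slice and applying it to an array-like object.
function argsToArray() {
  return Array.prototype.slice.call(arguments);
}
console.log(argsToArray(1, 2, 3)); // [1, 2, 3]

// The flip side: a method loses its `this` once detached.
const user = { name: 'Ada', getName() { return this.name; } };
const getName = user.getName;
console.log(getName.call({ name: 'Grace' })); // "Grace" - `this` is whatever you apply it to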
Flexibility is not 'bad' - it's a tradeoff. More flexible languages are better for writing DSLs, etc; but using them means a lot of self discipline regarding code style.
One thing I wish JS had is a PEP8-style canonical style document. I would find that more useful than classes.
On classes, I am among the less excited about them; however, we are in a ridiculous situation where there are about 500 different library implementations of class inheritance. Backbone has one. Ember has one. Node has util.inherits. Various transpile-to-JS languages (CS, TS) all have their own slightly-different implementations of the resulting prototype code. There is plainly a need for classes in JS, felt by some of the leading projects in JS land.
ES6 classes at least gives all these disparate implementations a refactoring target. How many of them will get there is another matter.
> using them means a lot of self discipline regarding code style.
That's bad. Few developers have self-discipline. The ones who do still have different rules than you do. That makes reading, understanding, and modifying their code harder. Sounds bad, doesn't it?
There's a reason the most experienced devs rage against extremely flexible languages like PHP and JavaScript and are excited about new, ultra-rigid languages.
A lot of other languages make you write a lot more code for things that JS can express very concisely. I usually find that the JS version is more readable. The problem, I think, is that most programmers are used to class-based inheritance and not prototypal inheritance.
Yup. The class keyword is a huge step backward, and it was only put in because of a very different world several years ago when all this was set in motion. It is just a wrapper around prototypal inheritance, but it hides a lot of the features in favor of making things look like other OOP languages, even though JavaScript isn't one.
It is also very debatable whether classic OOP is even good at all. JS was heading in the right direction, with more functional approaches, factory-based object creation, etc., and now it's all back to square one.
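For anyone unfamiliar with the factory style being referred to, a minimal sketch:

// Factory function: no `new`, no `this`, no prototype chain to reason about.
function createCounter() {
  let count = 0; // private state via closure
  return {
    increment() { count += 1; return count; },
    current() { return count; }
  };
}

const counter = createCounter();
counter.increment();
console.log(counter.current()); // 1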
Fortunately I work on a product where most devs have functional programming backgrounds, so the argument isn't too hard to make. But I fear for the community in general.
Why can all this new syntax be added, but the "use strict" pragma is still specified by a string? It made sense to send messages to the interpreter in a backwards compatible way, but now it just seems odd.
One of the reasons it's a big deal is what it means for functions. It helps make named parameters more attractive, which can help a lot with code maintainability.
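Concretely, the pattern looks something like this (names are illustrative):

// Destructured 'named parameters' with defaults.
function createUser({ name, age = 0, admin = false } = {}) {
  return { name, age, admin };
}

// Call sites stay readable and argument order no longer matters.
createUser({ name: 'Ada', admin: true });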
I'm against block scope in a way. It's semi-useful, but it's just sugar. So many features of ES6 seem to be designed for people who don't want to bother learning javascript: classes, block scope, arrow syntax, etc. It's just trying to shoehorn javascript into the template every other language follows when javascript is not every other language.
That being said, I like iterators, weak maps, typed arrays, TCO, and so on.
I'm a bit worried about ES6 making the language harder to understand.
For example, scoping: For backwards compatibility reasons, var has to stay function scoped. But now we also have let, which has different scoping rules. Also, Symbols: [Symbol.iterator]() - what? Why?!
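On the scoping point specifically, the two rules side by side:

function scopes() {
  if (true) {
    var a = 1; // var is function-scoped: still visible after the block
    let b = 2; // let is block-scoped: gone once the block ends
  }
  console.log(a); // 1
  console.log(b); // ReferenceError: b is not defined
}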
On the other hand, stuff like arrow functions, template strings, modules and tail optimization are awesome.
Can someone explain the reasoning behind having any kind of backwards compatibility? Who would pick a worse-but-compatible ES6 over a better incompatible version?
For example, the scoping rules: why have two separate scoping rules for let and var? To put it another way, would ES6 have let and var with differing scoping rules if it was designed today? If not, then it's a flaw (any difference from what a clean redesign would look like I consider a flaw).
Browsers aren't likely to implement two separate languages either. Furthermore, forcing people to pick between "ES5" and "ES6" if they just want to use a single ES6 feature is pretty bad.
So your fundamental choices are really an ES6 that is backwards-compatible or an ES6 that is not used.
On the other hand forcing newcomers to learn the warts of ES5 when they could learn what is becoming a reasonably elegant language isn't ideal either. Two scoping rules, for example.
An ES-latest runtime + transpilers would be all that browsers needed.
This has in fact been debated back and forth on the es-discuss mailing list and in TC39, especially given experiences with strict mode. The decision was generally made to not have more modes and to just have a single JS language.
You don't have to agree with that decision obviously, but this isn't something that happened willy-nilly.
What about it is not simple and consistent? Two different syntaxes, and two different expectations and rules. Var is function scope, and let is block scope.
I feel like in a new design you would pick one or the other, as a second scoping rule would not carry its own weight in terms of what problems it solves vs the complexity of implementing runtimes or learning the language. In that case I'm almost certain you would go for block only as it is the default in nearly all curly braced languages.
I agree with this list completely, but it doesn't mention one of the best reasons generators and promises are awesome: async/await-style asynchronous programming eliminates callback hell by letting you write asynchronous code as if it were synchronous, including using control structures like conditionals, loops, and try/catch.
And you can do it in (most) browsers today using a transpiler like Babel and a library like Bluebird, Q, co, or task.js.
The rest of the things are nice, but promises/generators (and eventually async/await) are game-changers.
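As a rough sketch of that style using the co library (the getJson helper and URLs are just illustrative):

const co = require('co');

// Returns a promise; any promise-returning function works here.
function getJson(url) {
  return fetch(url).then(res => res.json());
}

co(function* () {
  try {
    const user = yield getJson('/api/user');            // reads synchronously
    const posts = yield getJson(`/api/posts/${user.id}`);
    return posts.length;
  } catch (err) {
    console.error('request failed', err);               // ordinary try/catch works
  }
}).then(count => console.log(count));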
The problem with JavaScript generators, however, is that you can't yield from within other functions that the generator calls; only the generator body itself can yield. This very much limits the kind of async stuff you can do with them.
Not really (at least not for the kind of stuff I mentioned); those functions can just return promises, which the caller then yields: http://codepen.io/anon/pen/WvzGLa
Granted async/await syntax will make this a bit nicer:
async function funcA() {
  await sleep(1000); // sleep is assumed to return a promise
  // ...
}
I think .bind(this) could be added as a (more common?) alternative example to var self = this;
Personally I'm most excited about the module system. It's by far the most confusing thing for our students (and hard to google since you end up going down the rabbit hole of build systems etc.)
With the introduction of arrow functions you won't need to use .bind(this), as arrow functions capture this, arguments, and super from their enclosing lexical scope.
Yes, maybe my comment wasn't clear. I meant to say add ".bind(this)" as a case that is replaced by => just like "var self = this;" or (...)(); for that matter.
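For example:

// ES5: preserving `this` explicitly with bind (or var self = this).
function TimerES5() {
  this.seconds = 0;
  setInterval(function () {
    this.seconds++;
  }.bind(this), 1000);
}

// ES6: arrow functions capture `this` lexically, so no bind/self is needed.
function TimerES6() {
  this.seconds = 0;
  setInterval(() => {
    this.seconds++;
  }, 1000);
}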
I've just been playing with Promises and like them a lot. But one thing I find strange is that ".then()" creates a new promise, but with no way to reject it.
ie. I can't write:
return new Promise((resolve, reject) => {
// Do some stuff, call resolve()/reject() on success/failure.
}).then((step1Val, resolve, reject) => {
// Do some stuff, call resolve()/reject() on success/failure.
// But this doesn't actually work, because the "then" callback
// doesn't get resolve/reject as args.
});
Instead I'm having to write this as the following, which is more verbose:
return new Promise((resolve, reject) => {
// Do some stuff, call resolve()/reject() on success/failure.
}).then(new Promise((resolve, reject) => {
// Do some stuff, call resolve()/reject() on success/failure.
// But this way I don't get access to step1Val.
}));
Another bummer of this style is that the second step doesn't get access to the first step's value.
Promise.resolve() and Promise.reject() return a promise resolved or rejected with the value passed as the first argument. Returning a promise from the fulfillment function passed to .then() chains the promises together.
somethingThatReturnsPromise()
.then((foo) => {
return foo.bar;
// Or if you like it more verbose
return Promise.resolve(foo.bar);
// Or pass bar to a function modifying bar that returns a promise
return modifyBarReturnPromise(foo.bar);
})
.then((newBar) => {
console.log(newBar);
});
In your case, if you need step1Val in the next step of the promise chain, I personally do something like the sketch below; however, people more familiar with promises may know of a better way to do it (maybe with something like Promise.all() or Promise.props() in the Bluebird library).
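Roughly (doStepOne/doStepTwo are placeholder names): either nest the second then so the first value stays in scope, or collect both values with Promise.all:

// Option 1: nest so step1Val stays in scope.
doStepOne().then(step1Val =>
  doStepTwo(step1Val).then(step2Val => {
    console.log(step1Val, step2Val);
  })
);

// Option 2: carry both values forward with Promise.all.
doStepOne()
  .then(step1Val => Promise.all([step1Val, doStepTwo(step1Val)]))
  .then(([step1Val, step2Val]) => {
    console.log(step1Val, step2Val);
  });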
It took me a while to understand this correct response because I couldn't understand the documentation for promises. For the benefit of any other person who was likewise confused...
then() returns a new promise resolved to the return value of the function. However, if that value is itself a promise, then it follows the promise chain and passes the eventual state to the next then()/catch() call.
You don't need resolve or reject. Just return the promise:
.then(val => doSomeStuff())
That's it. The resulting promise will get resolved/rejected based on the return value of doSomeStuff().
You can also use throw to reject a promise manually, or return Promise.reject()
.then(val => {
if (something) return successVal;
else throw new Error("Failure");
});
For more complex scenarios where I need the values from previous actions, I like forgoing the chaining and using a join helper to unwrap any set of promises as needed:
let url = getUrl(resource);
let data = url.then(url => fetch(url));
let updateData = d => _.assign(d, {field: 'newVal'});
let update = join(url, data, (url, data) => sendUpdate(url, updateData(data)));
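(`join` isn't a built-in here; a minimal version can be written on top of Promise.all, something like the sketch below. Bluebird ships a similar helper as Promise.join.)

// Sketch of a join helper: waits for all inputs, then unwraps them.
function join(...args) {
  const handler = args.pop(); // the last argument is the combining function
  return Promise.all(args).then(values => handler(...values));
}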
I like jQuery's Deferred better than the ES6 Promise. Promises lack the `always` callback, and they don't have any `progress` events.
The spec on MDN[0] doesn't mention asynchronous `then` or `catch` behavior. If the callback in jQuery's `Deferred#then` returns a Deferred, that Deferred will be returned by `then`.
// Basic async function, resolves after n milliseconds
function wait(n) {
var promise = $.Deferred();
setTimeout(promise.resolve, n);
return promise;
}
wait(10)
.then(function() {
console.log('first'); // prints first after 10 milliseconds
return wait(10);
})
.then(function() {
console.log('second'); // prints second after 20 milliseconds
return 'done';
})
You can 'flip' a failed promise by returning a resolved promise in the fail callback.
var promise = $.Deferred();
setTimeout(promise.reject, 100)
promise.then(null, function () {
return $.Deferred().resolve([]);
}).done(function(arg) {
console.log(arg); // Prints '[]'
})
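For comparison, native promises can do the same 'flip' with .catch (or the second then argument), just without the done/always extras:

const promise = new Promise((resolve, reject) => {
  setTimeout(reject, 100);
});

promise
  .catch(() => [])      // the rejection is 'flipped' into a fulfilled []
  .then(arg => {
    console.log(arg);   // Prints '[]'
  });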
Not quite :) See my paste above; it returns a new Promise in the pending state that will deliver the value ("5" here) to the next then block, possibly immediately (i.e. pending for one event-loop tick, then resolved).
In this example (sketched below), I've sequenced the resolution of two promises (async functions return Promises when invoked).
If either of these Promises rejects or throws an error, or if any of the code within the try {} block throws, the error will be caught inline in the catch block (in one place only).
You can also see that both Promises' resolved values are in scope and remain usable (vs. losing scope with 'then' chains).
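The shape of it is roughly this (a sketch; fetchUser/fetchPosts are made-up names):

async function loadProfile(userId) {
  try {
    const user = await fetchUser(userId);        // first promise
    const posts = await fetchPosts(user.id);     // second promise; `user` is still in scope
    return { user, posts };
  } catch (err) {
    console.error('either step failed:', err);   // errors from both awaits land here
  }
}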
I see. Not sure what the best practice is for the losing scope in then chains issue with promises. I ended up creating something along the lines of a message object designed for each chain, which felt like a smell.
IMO threading an object through is probably the best method, unfortunately. You could use an Immutable Record or something to help keep it under control.
I would personally just use async/await to more explicitly handle the sequencing/binding.
Because the initial one is where you'd (potentially) interface with non-promise code. E.g. in order to wrap a node-style callback function, you can't just throw or return, so you need the resolve/reject pair. But in general you shouldn't need to use `new Promise()`; that should in most cases be reserved for more general, low-level code (e.g. a promisify implementation).
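i.e. roughly this kind of thing (a simplified promisify sketch):

// Wrap a node-style callback API once, at the boundary.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (err, result) => {
        if (err) reject(err);
        else resolve(result);
      });
    });
}

// e.g. const readFile = promisify(require('fs').readFile);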
I'm writing code against indexedDB (which doesn't use promises), but I want to expose promises to my callers. So I'm wrapping my indexedDB usage in Promises.
Also some of indexedDB doesn't seem like it would fit with promises, since some operations have 3 or more callbacks (onsuccess, onerror, onupgradeneeded).
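Wrapping a single request is straightforward; it's the open request with onupgradeneeded that doesn't map cleanly onto one resolve/reject pair. A sketch:

// A single IDBRequest maps onto a promise easily.
function requestToPromise(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// But indexedDB.open() also fires onupgradeneeded, which has to be handled
// outside the resolve/reject pair.
function openDatabase(name, version, upgrade) {
  const request = indexedDB.open(name, version);
  request.onupgradeneeded = event => upgrade(event.target.result);
  return requestToPromise(request);
}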
There are a number of IndexedDB libraries that use promises. However, there is (or was) a problem with how IndexedDB is specified to work that is not really compatible with promises. I'm not sure if they have fixed this yet.
https://github.com/promises-aplus/promises-spec/issues/45#is...
I can see the benefits of the IndexedDB behavior: an open transaction is an exclusive resource, so leaving one dangling locks out other transactions. The auto-commit behavior means it's a lot harder to accidentally leave a transaction dangling. But it is unfortunate that this makes it not play nicely with promises.
It makes explicit the intention that a promise should be rejected, as opposed to a throw, where one might reasonably expect an exception of some sort.
Although arrow functions are nice, they've been around in the form of lambda expressions in C# and other languages for some time, so the initial reaction was more of an "about time" than a "wow!" for me. Same goes for generators and the yield keyword.
Thanks for pointing out that the key difference between arrow functions and ordinary inline functions is the handling of 'this'. 'this' is going to be a source of confusion for a lot of people.
Yes, a lot of the new features are very "about time" but that doesn't remove the excitement to now have them as a part of JavaScript.
'this' without arrow functions is currently a huge source of confusion. I feel like the arrow functions will help to alleviate a lot of that confusion.
My biggest excitement is that in the past few years JavaScript has come to be regarded as a first-class language, and we're seeing these new versions, i.e. ES5, 6, 7, in a rapid development cycle. I remember being told in the 2000-2002 era that JS was a kiddie language and that it was worthless on a resume.
Now just throw away the DOM, replace it with something more suitable for applications instead of documents, and we can have something akin to Smalltalk back.
That's because 99% of the time when people use it, it's an anti-pattern: stackoverflow.com/questions/23803743/what-is-the-explicit-promise-construction-antipattern-and-how-do-i-avoid-it
The idea isn't "don't use promises", it's more "use/chain the promises you already got from libraries (e.g. AJAX calls), don't construct new promises yourself". See the StackOverflow question inglor linked to: https://stackoverflow.com/questions/23803743/what-is-the-exp...
Yeah, what dcoder said. You only need to create a promise at the source of an async event like IO. Internal logic will be downstream of IO and therefore will just be sequencing promises generated elsewhere. You do not (and should not) need to create intermediate promises to do that.
There is no harm in either. Overall, just have fun and learn JavaScript :)
ES5 is the current standard in all major browsers, so using its functionality could be considered "safe". Thankfully we have transpilers (which convert your ES6 code to ES5), so you can use ES6 features in production today - just remember they are converted to an ES5 implementation of said functionality.
Whether you're writing 5 or 6, it's still worth looking at and getting familiar with the various JavaScript build tools (Grunt, Gulp, etc.). They're an increasingly important part of a JavaScript developer's toolset, and using them to transpile 6 to 5 will get you off to a good start.
I understand them and use them quite a bit. I just don't understand why they're in the language spec. They don't add anything new to the language (vs say generators or symbols).
It allows web APIs built-in to the browser to return a promise, e.g. audioContext.decodeAudioData(data).then(...). If you just used a library, the browser wouldn't know how to wrap the returned value in a promise. It also guarantees interoperability between all APIs using promises, avoiding any incompatibility between different libraries.
Paving the cowpaths. Promises were already standardized before ES2015 arrived, but now you can just write libraries using promises without having to rely on the developer providing their promise implementation of choice. This also prevents scenarios where you might end up with N different promise implementations because you're using N libraries, each depending on a different promise module (e.g. bluebird, Q, jQuery.Deferred, AngularJS $q, etc) all of which are slightly different.
This also means more feature-rich implementations can just extend the native promises instead of implementing everything from scratch, though for me this pretty much spells the end of third-party promise implementations.
In other words, they've been added for the same reason as the various new helper methods: not because they add new functionality, but because they provide a reliable well-defined and consistent solution to a very common problem.
Modules? That may be a while, and it isn't a big concern, since until HTTP/2 is in wide effect (which may be really soon anyway) it makes more sense to package the files into one file (which you can do right now) using Browserify, JSPM, Webpack, or what have you, than to have the client request a whole bunch of JavaScript files at once.
Leaving aside the committee preference, what value do you perceive in the fact that it's a standard? It can't be that browsers support it, because they don't. Sure, Mozilla will probably get there within the next several years, and Chrome won't be too far behind, but forget about Safari and IE. So you'll still have to use Babel or something like it. Please note, there's nothing wrong with requiring a toolchain, which we already need anyway for minification, compression, CDNs, etc. But once we realize that, we realize that CoffeeScript is just the same as ES6 in that respect.
Well said. Many who rejected CoffeeScript because it required a toolchain now embrace ES6, which requires a toolchain. People rave about ES6 features like classes, arrow functions, destructuring, and variable interpolation as if they are something new. Maybe the best thing about ES6 is that it helped transpiling become accepted as a mainstream technique.