The title should be "Writing JavaScript code that turns out to be fast and memory-efficient in the 2012 editions of JS engines".
When discussing the efficiency of a language, especially an interpreted language, one must always take into account the peculiarities of the VM in use. And bear in mind that today's optimisations may be "pessimisations" in the future (see the many Linux optimisations that have been removed in recent years).
I would kind of like to read an article specifically about tuning performance in various IEs. (It would be, unfortunately, a very practical and useful article for me.)
A while back I wanted to put a JSON information string and a fairly sizable 3D model in the same request response, and then pull the JSON off without nailing the browser if the model was big. I keep meaning to test how well it works in different browsers as well.
Although it is possible that virtual machines change more rapidly than physical machines, isn't this also true for languages more directly targeting a physical machine, e.g. C?
I suppose what you're really getting at is that in JS we are subject to the whims of BOTH the compiler and the VM on the target browser?
I can personally attest from first-hand experience that "code rot" due to not taking advantage of the peculiarities of the current JS VM (and yesterday's "techniques" no longer being suitable) is far more commonplace than in any other programming environment I've dealt with.
Also of note -- with C you make your product, compile it, ship it and largely forget about it. It's rare that PC games get slower, even if compilers change, because you don't keep recompiling them. However, if you make a JS game, that thing is interpreted from scratch every time it's loaded. So if I make a JS game and ship it, the performance characteristics have a higher chance of changing in unexpected ways on me long after I've moved on.
(Note: This is simply a response to this particular aspect of performance tuning JS, I am obviously not getting into the benefits of interpretation which may very well outweigh these costs.)
This is horrible advice. There are plenty of things you want to delete, like references to elements when you don't need them anymore. A quick search of the Closure Library, jQuery, Backbone, Knockout, Angular.js, and ember.js found many uses of delete. Burn this article in a fire.
As for the supposed performance difference, let's see a jsperf on that.
I think the better take away might be: avoid changing the structure of 'hot' objects at runtime.
JS engines will zero in on these 'hot' objects and attempt to optimize access to them, a task which is easier if the object doesn't change in structure over its lifetime. 'delete' will trigger such a structure change.
By structure, in the context of JS, one should think of inferrable structure, like always setting .a to a Number and then .b to a String after instantiating an object.
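A minimal sketch of what that looks like in practice (the Point constructor here is just an illustration, not from the article):

// Hypothetical example: a 'hot' object whose structure stays stable.
function Point(x, y) {
  this.x = x  // always a Number
  this.y = y  // always a Number
}

var p = new Point(1, 2)
p.x = 3          // same structure: the engine can keep its optimized representation

delete p.y       // structure change: the engine will typically fall back to a slower, generic representation
p.y = undefined  // an alternative that keeps the key (and the structure) in place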
Agreed. But I'd worry about something like this once I notice I want to improve performance. In the book Beautiful Code one of the authors urged programmers to write beautiful code first then optimize when needed later. It's easier to optimize beautiful code than it is to beautify optimized code. Using delete is a very readable statement of intent.
Certainly delete can be useful, but nulling a property has almost the same effect (apart from hasOwnProperty still returning true) - if you're looking at optimising at the level this article is talking about, avoiding delete might well be sensible.
This could also be horrible advice, because now you're mixing types, which will make your code a little bit more complex, and you'll iterate over this null and have to deal with it. If you have an object property set to null you'll iterate over it; if you delete it, you won't. As for the performance improvements, let's see the jsperf.
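For anyone unsure what that iteration difference looks like, a quick sketch:

var obj = { a: 1, b: 2 }
obj.a = null   // key 'a' remains
delete obj.b   // key 'b' is gone

for (var key in obj) {
  console.log(key)                    // logs only "a"
}
console.log(obj.hasOwnProperty('a'))  // true
console.log(obj.hasOwnProperty('b'))  // false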
Looking back through my article, I agree that stating delete should outright never be used didn't make sense. It's of course used in numerous JavaScript libraries and has a purpose in the language. I've updated the text to reflect my suggestion that it should instead be avoided where possible. This advice stands, as it's more difficult for V8 to optimize objects whose structure you're changing.
For others commenting on this thread, I've also taken account of some of your suggestions and tried to update the article to be as accurate as possible. Thanks for the input!
The author mentions that programmers use delete for dereferencing. What?? That is the complete opposite of what 'delete' means and should never be done, ever. Use 'delete' to remove keys from a map, not as a stupid GC hack. Also, 'null != undefined'. They have different meanings. null will show up in a for..in loop, whereas a 'deleted' member is actually gone.
99% of performance problems are due to wrong code, not inefficient VMs. Write better code and you'll get faster programs. For those problems in the 1%, profile and fix.
I don't know, those are awfully significant performance differences. On mine, the last is 357 times faster than the first - might be a test flaw, but yikes.
That said, this (and many others) is probably not a concern for most (even most JS heavy) sites, since most simply don't compute enough to impact the page as much as, say, reducing reflows.
To "delete" is appropriate for hash-like structures, i.e., key-value maps. The runtime can't optimize these anyway; they will always be "generic slow objects". Plus, delete is the only way to remove the key, otherwise your hash will only grow in size, and you'd store a lot of "null" values for no reason that you'd have to explicitly filter.
No, but modern JavaScript engines share many traits that make these tips generally relevant across engines (although they may be more or less true for specific engines, and there may be exceptions). The author is clear that the article is focused on V8. Here's a similar link from Microsoft describing optimization suggestions for its Chakra engine that makes very similar recommendations: http://msdn.microsoft.com/en-us/library/windows/apps/hh78121...
This is good stuff. As I'm working on my game for GitHub's Game Off, I was thinking about writing something similar. It doesn't bother me that it's V8-centric, because devs are going to write about what they know best. My article probably would have been SpiderMonkey-focused.
Not sure I'll write it anymore though, this seems to have most of what I was going to say!
EDIT: After reading the article in more detail, there are a few things I think it misses. My focus is more on games, though. I still might write another article.
I think this guy makes an interesting point about performance-related issues stemming from using delete. However, maybe the proper advice would be for Google and other folks who optimize JavaScript's execution to improve the handling of delete, because it is a necessary feature of frameworks.
That's internal to jQuery though, which does you no good at all if you're invoking DOM manipulation methods yourself -- such as in the example in the article, where the code calls append() inside a loop. Each one of those calls has no knowledge of the others, though, and so jQuery cannot possibly do this for you.
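If you want that batching yourself, one common pattern (not from the article; the 'items' array and '#list' element here are just placeholders) is to build up a DocumentFragment and insert it once:

var items = ['one', 'two', 'three']
var fragment = document.createDocumentFragment()

items.forEach(function (item) {
  var li = document.createElement('li')
  li.textContent = item
  fragment.appendChild(li)  // cheap: the fragment isn't in the live document yet
})

// A single insertion into the DOM, assuming an element with id="list" exists.
document.getElementById('list').appendChild(fragment)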
Avoiding nested anonymous functions is good for more reasons than listed in the article. The article lists closures holding references to the closed-over variables as a source of wasted memory. Another point is the fact that anonymous functions require a memory allocation each time they are created:
// Uses more memory:
Foo.prototype.someFunction = function () {
  ;[2, 5, 9].forEach(function (element, index, array) {
    console.log("a[" + index + "] = " + element)
  })
}
The anonymous callback will be created every time someFunction (and thus the forEach loop) is called.
// Uses less memory:
Foo.prototype.logArrayElements = function (element, index, array) {
  console.log("a[" + index + "] = " + element)
}
Foo.prototype.someFunction = function () {
  ;[2, 5, 9].forEach(this.logArrayElements)
}
Create your functions once and keep them around for reuse to save memory.
In both examples, the function is only allocated once, no matter how it's instantiated.
Where you have to be careful is if you have a function that's being called repeatedly and it contains a function instantiation -- that function will be reallocated each time.
[1,2,3].forEach(function(x) {
  return (function(a, b) { return a * b })(x, 2)
})
Will repeatedly allocate that inner multiplication function, vs:
var mul = function(a, b) { return a * b }
;[1,2,3].forEach(function(x) {
  return mul(x, 2)
})
Will only allocate that multiplication function once.
I think we are in complete agreement. Your example is a little prettier; however, you still have this "unoptimized" line:
;[1,2,3].forEach(function(x) {
If this line is executed more than once, the nested function will be created again and again, wasting memory. However, some people may see my advice here as a micro-optimization. Thus, don't make your code ugly until you identify the real-world bottlenecks. I have updated my code in the grandparent comment to explain myself a little more clearly (maybe)...
Why an expression and not simply `function logArrayElements`? If you do the latter you'll see more information when you want to introspect (toString on the function, or in an error traceback).
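Roughly the difference (reusing the Foo from the earlier comment, defined here so the snippet stands alone):

function Foo() {}

// Declaration: the name travels with the function, so it shows up
// in fn.toString() and in error tracebacks.
function logArrayElements(element, index) {
  console.log("a[" + index + "] = " + element)
}
Foo.prototype.logArrayElements = logArrayElements

// vs. the anonymous expression, where the function itself has no name:
// Foo.prototype.logArrayElements = function (element, index) { ... }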