
JavaScript is faster in most cases than Objective-C.

(I'm not going to dispute that cross-platform toolkits can have less native fidelity than coding to the native toolkit. I just don't like the Objective-C vs. JavaScript performance myth.)




Huh? What's the Objective-C vs. JavaScript performance myth?

Here's a link showing Objective-C beating JavaScript handily: https://medium.com/@harrycheung/mobile-app-performance-redux...

But is this even a debate? Wouldn't you expect a compiled, manually memory-managed language to be faster than an interpreted language with garbage collection?


That's an interesting benchmark, and I'd need to dive into the details to see what is going on. Perhaps there is some sort of JIT slow path. I would not expect method-heavy Objective-C to beat JavaScript. In general:

> But is this even a debate? Wouldn't you expect a compiled, manually memory-managed language to be faster than an interpreted language with garbage collection?

Objective-C is not compiled in terms of method dispatch, nor is it manually memory managed. Instead, all method dispatch happens through what is essentially interned-string lookup at runtime, backed by a cache. Objective-C also has a slow form of garbage collection: atomic reference counting for all objects. (Hans Boehm has some well-known numbers showing how slow this is compared to any tracing GC, much less a good generational tracing GC like all non-Safari browsers have.)

The method lookup issue has massive consequences for optimization. Because JavaScript has a JIT, polymorphic inline caching is feasible, whereas in Objective-C it is not. It's been well known in Smalltalk research since the '80s that inline caching is essentially the only way to make dynamic method lookup acceptably fast. Moreover, JavaScript has the advantage of speculative optimization: when a particular method target has been observed, the JIT can perform speculative inlining and recompile the function. Inlining is key to all sorts of optimizations, because it turns interprocedural problems into intraprocedural ones that the optimizer can actually attack. It can easily make a 2x-10x difference or more in performance. This route is completely closed off to Objective-C (unless the programmer manually does IMP caching or whatnot), because the compiler cannot see through method lookups.

Apple engineers know this, which is why Swift backed off to a more C++-like model for vtable dispatch and has aggressive devirtualization optimizations built on top of this model implemented in swiftc. This effort effectively makes iOS's native language catch up to what JavaScript JITs can already do through speculation.


Thanks for the detailed response. To summarize, it sounds like your position is that:

1. Objective-C's compile-time memory management is actually slower than JavaScript's garbage collection.

2. The performance consequences of Objective-C message sending outweigh the costs of JavaScript's JIT compilation. And furthermore, that JIT compilation is actually an advantage due to the other optimization techniques it enables.

I'd like to see a more direct comparison with benchmarks, but I can see where you're coming from.


Right. Note that this advantage pretty much goes away with Swift. Swift is very smartly designed to fix the exact problems that Apple was hitting with Objective-C performance.

I realized another issue, too: I don't think it's possible to perform scalar replacement of aggregates on Objective-C objects at all, whereas JavaScript engines are now starting to be able to escape analyze and SROA JS values. SROA is another critical optimization because it converts memory into SSA values, where instcombine and other optimizations can work on them. Again, Swift fixes this with SIL-level SROA.


Actually, JavaScript is JIT-compiled by most engines these days. While it's not the fastest, it certainly can be quite fast.


I see, good point, but then I'd expect JIT compilation itself to still carry a performance cost, as opposed to Objective-C, which is compiled before distribution to the client?


JIT compilation cost is minimal in practice due to tiered JITs.


Source please, because unless you have anything to back this up, your claim can only be considered grade-A FUD.

Sometimes I feel like using JavaScript too much transports people to some kind of imaginary JavaScript fairy land full of rainbows and unicorns. I read the craziest things about JavaScript development, and I can't comprehend why so many people believe it has any redeeming qualities over other languages besides ubiquity.


See my reply to the sibling comment for the explanation. There haven't been enough cross-language benchmarks here to say definitively, but as a compiler developer I can tell you the method lookup issue is really fundamental and in fact is most of the reason for Swift's (eventual) performance advantage over Objective-C.


I've read your explanation, but I'm not convinced it supports your claims (which, barring any benchmarks that make them factual, I consider assumptions).

I'm aware of the dynamic dispatch overhead of Objective-C, but first, it's my understanding that Apple's Objective-C runtime and compiler perform all kinds of smart tricks to reduce that overhead to a minimum (caching selector lookups and such), and second, Objective-C does not require you to use dynamic dispatch if performance is a concern. No one is preventing you from using plain old C-style functions for performance-critical sections.

I also don't buy the 'ARC is slower than GC' argument. ARC reference counting on 64-bit iOS, as implemented using tagged pointers, has almost zero overhead for typical workloads. Only if you wrote some kind of computational kernel that operates on NSValue or whatever (which is a dumb idea in any scenario, about as dumb as writing such a thing in JavaScript) would you ever see a difference compared to having no memory management at all. Just like your other performance claims: without data, there is nothing that backs up your statement that ARC is slower than GC for typical workloads. Hans Boehm is not the most objective source for such benchmarks, by the way.

Apart from that, you seem to spend an awful lot of effort explaining the things that would make Objective-C slower than JITted JavaScript, while completely disregarding the overhead all this JITting, dynamic typing, etc. has, and the fact that in JavaScript you basically have no way to optimize your code for cache friendliness or whatnot.

You may be a compiler developer, but based on your comments I'm highly doubtful you are aware of how much optimization already went into Apple's compilers, which greatly reduce the overhead of dynamic dispatch and ARC.


> second, Objective-C does not require you to use dynamic dispatch if performance is a concern. No one is preventing you from using plain old C-style functions for performance-critical sections.

That's just writing C, not Objective-C. But if we're going to go there, then JavaScript doesn't require it either. You can even use a C compiler if you like (WebAssembly/asm.js).

> ARC reference counting on 64 bit iOS, as implemented using tagged pointers, has almost zero overhead for typical workloads.

No, it doesn't. I can guarantee it. The overhead is small in relative terms, I'm sure, but it would be smaller still if Objective-C used a good generational tracing GC.

(This is not to say Apple is wrong to use reference counting. Latency matters too.)

> Just like your other performance claims: without data, there is nothing that backs up your statement that ARC is slower than GC for typical workloads. Hans Boehm is not the most objective source for such benchmarks by the way.

There's not much I can say if you're determined to discount real numbers solely because Hans Boehm provided them. But here you go ("ARC" is "thread safe"): http://www.hboehm.info/gc/nonmoving/html/slide_11.html

Anyway, just to name one of the most famous of dozens of academic papers, here's "Down for the Count" backing up these claims: https://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-... Figure 9: "Our optimized reference counting very closely matches mark-sweep, while standard reference counting performs 30% worse." (Apple's reference counting does none of the optimizations in "Down for the Count".)

Here's the Ulterior Reference Counting paper, showing in table 3 that reference counting loses to mark and sweep in total time: http://www.cs.utexas.edu/users/mckinley/papers/urc-oopsla-20...

memorymanagement.org (an excellent resource for this stuff, by the way) says "Reference counting is often used because it can be implemented without any support from the language or compiler...However, it would normally be more efficient to use a tracing garbage collector instead." http://www.memorymanagement.org/glossary/r.html#term-referen...

This has been measured again and again and reference counting always loses in throughput.

> You may be a compiler developer, but based on your comments I'm highly doubtful you are aware of how much optimization already went into Apple's compilers, which greatly reduce the overhead of dynamic dispatch and ARC.

I've worked with Apple's compiler technology (clang and LLVM) for years. The method caching in Objective-C is implemented with a hash table lookup. It's like 10 instructions [1] compared to 1 or 2 in C++, which is a 3x-4x difference right there. But the real problem isn't the method caching: it's the lack of devirtualization and inlining, which allows all sorts of other optimizations to kick in.

Apple did things differently in Swift for a reason, you know.

[1]: http://sealiesoftware.com/msg/x86-mavericks.html


That's again a lot of information showing why Objective-C is not the most efficient language possible for all use cases, but it still does not provide any evidence that JavaScript would be faster. I'm not disputing the individual points you made, but in the context of comparing the overall performance of Objective-C vs. JavaScript, it doesn't say much at all. It mostly shows Objective-C will always be slower than straight C/C++, and nothing about JavaScript performance. I appreciate the thorough reply, though.

One thing I do want to make a last comment about is your dismissal of using C/C++ inside Objective-C programs as some kind of bait-and-switch argument. Using C/C++ for performance-critical sections is IMO not the same as calling out to native code from something like JavaScript, or writing asm.js or whatever other crutch you could use to escape the performance limitations of a language. As a superset of C, mixing C/C++ with Objective-C is so ingrained in the language that you have to consider it a language feature, not a 'breakout feature'. Nobody who cares about performance writes tight loops using NSValue or NSArray, or dispatches millions of messages from code on the critical path of the application's performance (which usually covers less than 10% of your codebase).

As an example, I'm currently writing particle systems in Objective-C, but it wouldn't even cross my mind to use anything but straight C arrays and for loops that operate directly on the data to store and manipulate particles. This is nothing like 'escaping from Objective-C', as all of it is still architecturally embedded transparently inside the rest of the Objective-C code, just using different data types (float * instead of NSArray) and calling conventions (direct data access instead of encapsulation). It's more like using hand-tuned data structures vs. the STL in C++ than like calling native code or writing asm.js from JavaScript.


No, it's not.




