
How does the running performance of WebGL compare to what would be possible if it were a native app? Are there inherent limitations of implementing this in the context of the web that will keep it much slower, or will continued optimization and things like hardware-accelerated canvas elements allow near parity (>80%) in performance?



Well, the short answer is: hard to say. A lot of work is offloaded to GPUs nowadays, and in theory there's no reason why talking to the video card through a browser should be slower than talking to it through a native executable (.exe). The reality is that far more sandboxing and verification has to happen, since nobody likes having their browser crash out of the blue. As for the performance of the languages themselves (JavaScript vs. C or x86), you will always get more performance the closer to the iron you are, but the line is getting fairly blurred and compilers/interpreters are pretty damn smart these days.

I realize I'm not answering your question entirely, but what I read between the lines is you're wondering whether WebGL will be able to replace "native" applications, performance-wise in the future. The ball is in the air on that one, and it's about to be caught by Intel/AMD/nVidia. And I bet you they are already cuddling with the browser-developers (or at least rubbing their hands in glee).

I guess the quick and dirty test for the state of things right now is to simply check the CPU/GPU performance on this one and compare it with current top of the month here: http://pouet.net/prod.php?which=56871 (note that this is a 64KiB demo -- in comparison, the background picture of http://romealbum.com/ is three times bigger).

A fairer comparison would be: http://pouet.net/prod.php?which=56900 (also released this month, it seems)

Do let me know if you come to any kind of conclusion :)


I think it depends a lot on how much work you're doing. If you're just viewing a static 3D model and doing some scaling or rotation, it's not so bad, but if you're adding/removing/translating objects, it's going to be ass-slow in JavaScript. Your best bet is probably to write your code procedurally and hope that V8 can compile it to machine code really well.
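To make the "write it procedurally" point concrete, here's a rough sketch (names and layout are my own, purely illustrative) of the style that tends to JIT well: keep object positions in one flat Float32Array instead of per-object {x, y, z} objects, so the hot loop is monomorphic and allocation-free.

```javascript
// Illustrative sketch: JIT-friendly scene updates.
// All positions live in one flat Float32Array (x0, y0, z0, x1, y1, z1, ...),
// so V8 sees a tight, monomorphic, allocation-free loop it can
// compile down to good machine code.
function translateAll(positions, dx, dy, dz) {
  for (var i = 0; i < positions.length; i += 3) {
    positions[i]     += dx;
    positions[i + 1] += dy;
    positions[i + 2] += dz;
  }
}

var positions = new Float32Array([0, 0, 0, 1, 2, 3]);
translateAll(positions, 10, 10, 10);
// positions is now [10, 10, 10, 11, 12, 13]
```

As a bonus, a flat typed array like this can be handed straight to WebGL as a vertex buffer without any conversion step.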

WebGL is really good and it's only going to get better as long as the creators of the browsers make enough money to subsidize the optimization.

It may be that Chrome/V8 need to start using LLVM.


> It may be that Chrome/V8 need to start using LLVM.

LLVM is far too heavy to be used in such a situation -- page load times would rise dramatically. Really, V8 just needs to start doing hot spot optimizations. Do a quick first pass like they're doing now, then incrementally optimize away the hot spots. If I had to take a guess, I'd say that'll be coming in the next year, in some form or another.
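The shape of that idea can be sketched in a few lines. To be clear, this is a toy model of tiered compilation in general, not of V8's actual internals: run a cheap baseline version first, count invocations, and swap in a faster version once a call site proves hot.

```javascript
// Toy sketch of hot-spot optimization (illustrative only):
// start with a quick first pass, count calls, and replace the
// implementation once the call site crosses a "hot" threshold.
function tiered(baseline, optimized, threshold) {
  var calls = 0;
  var current = baseline;
  return function (x) {
    if (++calls === threshold) {
      current = optimized; // "recompile" the hot spot
    }
    return current(x);
  };
}

var square = tiered(
  function (x) { return x * x; }, // cheap baseline version
  function (x) { return x * x; }, // stand-in for the optimized version
  1000
);
```

The win in a real engine is that the expensive optimizing compiler only ever runs on the small fraction of code that dominates execution time.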


I believe hot-spot optimization is what the new Crankshaft infrastructure was about, or are you thinking of something else?


Ah yes, I forgot all about that. Guess it's not a prediction if it's already come true. Anyway, more focus will be put on such things, as we're about as far along as we can get with the initial fast compilation.


I'm used to it. I invented alphanumeric pagers in high school about 5 years after they were available (I had no idea they existed...)


Once you get the data and the shaders over to the GPU, it's going to draw at pretty much full speed. That said, penalties for running on the web include:

Even a well-written JS app on a great JIT is going to have trouble keeping up with an equivalently well-written native app that has access to SIMD, cache-aware memory layouts, prefetching, and other native features. My CPU can do 200 million matrix × vector multiplies per second using SSE3; in Firefox 4's JS it can do 20 million. Impressive, but not equivalent.
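For reference, here's a rough sketch of the kind of inner loop such a microbenchmark measures (the function name and column-major layout are my own choices, and the quoted throughput numbers will obviously vary by machine):

```javascript
// Illustrative mat4 * vec4 multiply, the kind of kernel where native
// SIMD (e.g. SSE3) pulls far ahead of even well-JITed JavaScript.
// m is a 4x4 matrix in column-major order; the result is written
// into a preallocated `out` to avoid per-call allocation.
function mulMat4Vec4(m, v, out) {
  for (var row = 0; row < 4; row++) {
    out[row] = m[row]      * v[0] +
               m[row + 4]  * v[1] +
               m[row + 8]  * v[2] +
               m[row + 12] * v[3];
  }
  return out;
}

var identity = new Float32Array([
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1
]);
var v = new Float32Array([1, 2, 3, 1]);
var out = new Float32Array(4);
mulMat4Vec4(identity, v, out);
// out is [1, 2, 3, 1]
```

A native version can do the same row sums with a handful of SSE instructions and keep the matrix hot in registers, which is where much of the 10x gap comes from.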

On Windows, you are probably running GL on top of a D3D translation layer. That adds overhead to an API that you sometimes want to call into 60,000+ times per second. The GLSL to HLSL shader translation step is a big pain during load times. I'm not sure how much of a run-time penalty it causes.

Web GLSL is somewhere between HLSL Shader Model 2 and 3 from 2003. The latest desktop GPUs run Shader Model 5 which has many sophisticated features and performance opportunities. However, many of those are a bit too sophisticated for casual 3D programmers. Hopefully WebOpenCL will come eventually and shrink that feature gap.


Don't get me wrong, I'm excited about WebGL, etc. But I've got a brand spanking new machine that churns through every task I've thrown at it... except WebGL demos. I'm running the latest Chrome and it's simply choppy as hell, despite the low polygon count.

Why are we bending over backwards to modernize a bunch of antiquated document technologies?

I quite like Javascript, but it is rapidly becoming bytecode. If "compilation target" were a JS design goal, I'd consider Javascript an abject failure at it. Similarly, while I realize WebGL is in its infancy, any 3D rendering layer that, by design, cannot approach the performance of Direct3D at simply pushing polygons (not even talking about programmable pipelines) is also an abject failure.


I'm also running the latest Chrome on Linux, on a cheap laptop (although it does have 2 cores). I can run it with no problems and no visible system slowdown.

I guess it also depends on the software used, on drivers, etc. The technology is very new and still a work in progress.

> by design, cannot approach the performance of Direct3D
And why is that?



