
Well, the short answer is: hard to say. A lot of work is offloaded to GPUs nowadays, and in theory there's no reason why talking to the video card through a browser should be slower than talking to it through a native executable (.exe). The reality is that far more sandboxing and verification has to happen, since no one likes having their browser crash out of the blue. As for the performance of languages (JavaScript vs. C or x86): naturally, you'll always get more performance the closer to the iron you are, but the line is getting fairly blurred, and compilers/interpreters are pretty damn smart these days.

I realize I'm not answering your question entirely, but what I read between the lines is that you're wondering whether WebGL will be able to replace "native" applications, performance-wise, in the future. The ball is in the air on that one, and it's about to be caught by Intel/AMD/Nvidia. I'd bet they're already cuddling up to the browser developers (or at least rubbing their hands in glee).

I guess the quick-and-dirty test for the state of things right now is simply to check the CPU/GPU performance on this one and compare it with the current top of the month here: http://pouet.net/prod.php?which=56871 (note that this is a 64 KiB demo -- in comparison, the background picture of http://romealbum.com/ is three times that size).

A fairer comparison would be: http://pouet.net/prod.php?which=56900 (also released this month, it seems)

Do let me know if you come to any kind of conclusion :)




I think it depends a lot on how much work you're doing. If you're just viewing a static 3D model and doing some scaling or rotation, it's not so bad, but if you're adding/removing/translating objects, it's going to be ass slow in JavaScript. Your best bet is probably to write your code procedurally and hope that V8 can compile it to machine code well.
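To make that concrete, here's a minimal sketch of what "procedural, V8-friendly" code might look like (the function name and layout are my own invention, not from any particular library): keep the hot path monomorphic and allocation-free by storing points in a flat Float32Array instead of an array of point objects, so the JIT can emit tight machine code.

```javascript
// Transform an array of 3D points in place by a 4x4 column-major matrix.
// points: Float32Array laid out as [x0, y0, z0, x1, y1, z1, ...]
// m: Float32Array of 16 matrix entries, column-major (OpenGL convention)
function transformPoints(points, m) {
  for (var i = 0; i < points.length; i += 3) {
    var x = points[i], y = points[i + 1], z = points[i + 2];
    // No object or array allocation inside the loop; only number math
    // on typed-array elements, which stays monomorphic for the JIT.
    points[i]     = m[0] * x + m[4] * y + m[8]  * z + m[12];
    points[i + 1] = m[1] * x + m[5] * y + m[9]  * z + m[13];
    points[i + 2] = m[2] * x + m[6] * y + m[10] * z + m[14];
  }
}
```

The flat-array layout also happens to be exactly what you'd upload to a WebGL vertex buffer, so you avoid a conversion step too.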

WebGL is really good, and it's only going to get better as long as the browser vendors make enough money to subsidize the optimization work.

It may be that Chrome/V8 need to start using LLVM.


> It may be that Chrome/V8 need to start using LLVM.

LLVM is far too heavy to use in this situation -- page load times would rise dramatically. Really, V8 just needs to start doing hot-spot optimization: do a quick first pass like they're doing now, then incrementally optimize away the hot spots. If I had to guess, I'd say that'll be coming within the next year, in some form or another.
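The strategy described above can be sketched as a toy in a few lines (all names here are hypothetical, purely for illustration): run everything through a cheap baseline path, count invocations, and only spend the expensive optimization effort once a function proves itself hot.

```javascript
// Invocation threshold after which a function is considered "hot".
var HOT_THRESHOLD = 1000;

// Wraps a cheaply-compiled baseline function; once it has been called
// HOT_THRESHOLD times, a one-time (expensive) optimizer pass replaces it.
function makeTieredFunction(baseline, optimize) {
  var calls = 0;
  var current = baseline;
  return function () {
    if (++calls === HOT_THRESHOLD) {
      current = optimize(baseline); // recompile hot code once
    }
    return current.apply(this, arguments);
  };
}
```

A real VM would key the counter to loops and call sites rather than whole functions, but the trade-off is the same: cold code never pays the optimizer's cost, which keeps page load snappy.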


I believe hot-spot optimization is what the new Crankshaft infrastructure is about, or are you thinking of something else?


Ah yes, I forgot all about that. Guess it's not a prediction if it's already come true. Anyway, more focus will be put on such things, since we're about as far along as we can get with fast initial compilation.


I'm used to it. I invented alphanumeric pagers in high school about 5 years after they were available (I had no idea they existed...)



