Please ignore the bad reporting / marketing spin and focus on what the Chrome team actually did. The change is significant and important.
It doesn't really matter that your current web applications will run at more or less the same speed. Removing a huge performance bottleneck [1] paves the way for a completely new type of application, so far unheard of on the web (far beyond what's possible with Flash [2]).
And other browsers will follow; Firefox already has its own solution on the way.
Trust me, this is big news, though the effects will only be felt after some time, once developers catch up. Even "old-fashioned" unaccelerated 2D canvas is still very far from reaching its full potential.
-------
[1] This bottleneck came mainly from compositing classical software-rendered web content with new hardware-accelerated content. The new approach is to do everything hardware-accelerated (which, incidentally, is why fonts look a bit off in recent Chrome and Firefox builds).
[2] Just wait until WebGL is enabled by default in the upcoming Firefox 4 and Chrome 7. There is a marked difference in capabilities; shaders are crazy powerful.
This is right on the money. People will be blown away by what Google Maps will look like once enough browsers have WebGL and the maps team switches to that technology instead of the crude JavaScript tiling they rely on right now.
Unfortunately, the companies that they (Bing, Yahoo, Google, etc.) buy map data from would be very much against pushing their precious vector data out into the public. All of the licensing is set up specifically for tiled maps. It will take years and years before the lawyers agree to it and figure out the licensing details. Perhaps Google can circumvent this in the US because they own their US data, but what about the rest of the world?
Hey, I'm struggling with that right now. Simple calls to canvas fillText produce text that is way too heavy in Chrome. (The same calls work fine in Firefox.) I must be doing something wrong, because Bespin uses canvas and its text looks normal. Any guesses as to what it might be?
The people who are complaining that this speedup only applies to 2D canvas are missing the point.
HTML rendering itself is already "fast enough." All of the perceptible delay when loading a webpage is network latency, rendering is already pretty much instantaneous on all modern browsers.
What browsers can't do right now is enable webapps with native speed and a native experience. And that is exactly what Google is targeting, with this change in combination with Native Client.
If HTML5 is ever going to be a real Flash replacement, it's advances like these that will make it happen.
HTML rendering itself is already "fast enough." [...] rendering is already pretty much instantaneous on all modern browsers
Strongly disagree. HTML rendering is terribly slow at certain things (especially in Firefox). Unfortunately they're the things our app's usability depends on, so we've had many struggles in this department. Anyone who's ever innocently accessed a DOM element's offsetLeft and wondered why their code got 100x slower will know what I mean.
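What the parent is describing is forced synchronous layout: reading offsetLeft makes the browser flush any pending style changes before it can answer. A minimal sketch of the anti-pattern and the usual fix, using hypothetical helper names:

```javascript
// Anti-pattern: interleaving reads and writes. Each offsetLeft read
// forces the browser to recompute layout so the value is up to date,
// so a loop over N elements can trigger N reflows.
function shiftAllSlow(elements, dx) {
  for (const el of elements) {
    el.style.left = (el.offsetLeft + dx) + 'px'; // read, write, read, write...
  }
}

// Fix: batch all reads first, then all writes, so the browser
// only needs to recompute layout once.
function shiftAllFast(elements, dx) {
  const lefts = elements.map((el) => el.offsetLeft); // batched reads
  elements.forEach((el, i) => {
    el.style.left = (lefts[i] + dx) + 'px';          // batched writes
  });
}
```

For absolutely positioned elements both versions produce the same final styles; the difference is only the number of forced reflows.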
DOM rendering is so far from instantaneous that it's scandalous. But optimizing JavaScript implementations for benchmarks is easier, sexier, and more fun, so that's where the energy has mostly been going. I'd say it's actually the JS engines that are "fast enough" at this point, but of course different apps have different needs. Thank goodness Google has been pushing the envelope in every department with Chrome and not just one.
Programmatic manipulation of the DOM does need a lot of work, yes. But that's not "page rendering" per se. Taking HTML and rendering it is pretty quick in most modern browsers. Mucking about with the DOM and having it refresh efficiently is a related but different problem.
HTML rendering is nowhere near fast enough. When building rich interfaces, keeping the UI responsive while modifying it extensively can be really, really hard. If we still have to use tricks like optimizing CSS selectors, using document fragments to add and remove things from the DOM, and absolutely positioning everything because triggering a redraw is still the bottleneck, then I don't think rendering is anywhere near fast enough yet.
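The document-fragment trick mentioned above: build the rows off-DOM, then attach them in one operation, so the live document is only invalidated once. A sketch in plain browser JavaScript (function name is made up):

```javascript
// Build table rows into a detached DocumentFragment and append it in
// one go; appending the fragment moves its children into the table,
// so the live DOM is touched (and re-laid-out) once, not once per row.
function appendRows(table, rows) {
  const frag = document.createDocumentFragment();
  for (const cells of rows) {
    const tr = document.createElement('tr');
    for (const text of cells) {
      const td = document.createElement('td');
      td.textContent = text;
      tr.appendChild(td);
    }
    frag.appendChild(tr);
  }
  table.appendChild(frag); // single insertion into the live DOM
}
```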
The answer was to provide better tools that allowed users to analyze/interact with the data without having to render 10k rows to the screen.
It's not like they were able to read them all anyway. I'm going to go out on a limb and say that anything that just dumps 10,000 rows to the screen has, a priori, terrible usability.
It's not really a case I feel browser designers ought to spend a lot of time optimizing for.
That's true, but there wasn't time (i.e. money) to design something better.
It started much, much smaller, but they used it so much that it grew far beyond its initial design, and we were never able to go back and do something better. (It's an internal tool, BTW, so we, i.e. they, just dealt with it.)
How much of that time is spent loading? Have you tried saving it and opening it locally? Is the data being handled in any way by JavaScript that could be inefficient (for instance, tons of table tools use JS hover effects instead of CSS)?
I'm actually interested; this is not to say it isn't actually slow. Just trying to find out why, and how slow it is.
It was local: PHP-generated output from a database. (And the PHP wasn't slow; saving the output to a file was almost instant.)
I think it was just a totally barebones table: tr and td elements for many rows from a database table. I also had another page that made an editable form from the data (that data had a bit more structure than the first, and some CSS, but no onload JavaScript, just onclick handlers).
It was handmade, so no hover effects or other things.
Firefox was really slow, and my customer eventually switched to Chrome for that page. But it's still slow, just not as slow.
I remember having my browser freeze totally for about 10 minutes (yes I timed it).
Still, give it a try from a static HTML file; it could behave differently.
The 10 minutes sounds more like a bug than anything else. I've loaded 20MB heavily-styled "books" in HTML that took ~10 seconds to load but scrolled like molasses; I've never seen a large table behave that poorly (though I've only handled one or two thousand rows).
I don't have experience with tables as large as yours, but I have noticed that if you give the browser hints about the columns using the <col> tag, it loads large tables faster.
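For reference, the hint looks like this; declared widths let the browser lay out the table before it has seen every row (the widths here are made up):

```html
<table style="table-layout: fixed; width: 100%">
  <!-- Column hints up front: the browser can size columns immediately
       instead of measuring the contents of every row first. -->
  <col style="width: 4em">
  <col style="width: 20em">
  <col>
  <tr><td>1</td><td>first row</td><td>...</td></tr>
  <!-- thousands more rows -->
</table>
```

Combining <col> with table-layout: fixed gives the biggest win, since fixed layout never needs to measure cell contents at all.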
It's true that this won't make JavaScript or HTTP any faster, but it's still very exciting to me. This is about making canvas faster. That's great because nearly all of the canvas games/demos have been spending around 90% of their cycles waiting on canvas and only 10% in JavaScript. As great as canvas is, so far it has only been able to replicate games from the era of the ~3.5MHz SNES. With this change, we may be able to reach the ~33MHz PlayStation 1 era (only 15 years behind the curve!).
I think the point is that the article itself is not making sense, not the news it is trying to report.
When I read the article, I also thought about normal page-load times. Those are already so fast that a 60x speedup is hardly noticeable, which would make the news trivial and the Chromium team look like freakish geeks with nothing better to do than crunch numbers only a machine can read.
That was until I watched the demo video, which I had missed in the article and only found here. I can see that, particularly for the photo wall and the painting demo, snappiness is what would make them work.
A very specific subset of 2D graphics performance, it seems: 2D animation using certain kinds of transforms. Rendering large/complex SVGs isn't getting 60x faster.
And this is just one part of page rendering. This part may already be negligible, representing fractions of a second, drowned out by the parsing and structuring of the output done before actually rendering it.
To me, all of this is just windbagging and vacuous marketing - just like the (almost entirely) pointless Javascript boasting.
I think the point is that "up to 60x" in the Chromium Blog post should have been "up to 70x" or "over 60x in some cases." (But actually this depends on how the 0.5 number was rounded...)
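To make the rounding nitpick concrete, with made-up raw measurements (the blog post doesn't publish them):

```javascript
// Hypothetical timings: 35ms per frame in software vs 0.5ms accelerated.
const ratio = 35 / 0.5;
console.log(ratio); // 70 -- "up to 60x" would undersell this

// But if the 0.5 figure was itself rounded (say, from 0.58ms), the
// true speedup really is only about 60x:
console.log((35 / 0.58).toFixed(1)); // "60.3"
```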
As rendering gets much faster, I guess JS will effectively get much faster too, at least when driving canvas from JS, which I believe is the most important bottleneck right now.
How would this acceleration work with Windows Remote Desktop? It would work with VNC, since VNC grabs the whole screen, but RDC replaces the video driver, and then rendering stops being accelerated.
Newer versions of Remote Desktop know how to pass through some hardware-accelerated things if you have Vista+ on both ends. Or so I've read. I don't know at what layer that mechanism applies, though; it may intercept Direct3D calls, or it may hook into the application layer to work with WPF.
We are on Vista, and we have problems with D3D. Another problem, even without RDC, is the security policy IT put in place that logs off the machine if it's unused for 15 minutes. That's okay, but certain D3D applications would lose the device and fail to restore it correctly.
(That said, it's just way easier to write such applications without thinking about recovering from a D3D lost device. It's a real pain to get that working, especially if you have to put 1GB of vertex buffer/image data in memory and expect it to still be there.)
They have a mechanism to detect whether acceleration is available. For example, some cards don't have OpenGL drivers, so they created ANGLE (which translates OpenGL ES calls to Direct3D) to fix that. They also have a software-rendering fallback, which can be re-enabled if needed.
Sorry, the unstated and unrealized premise of my reply was that this is good and great, but that there are also legacy issues around the hardware people will actually use to run their web browsers.
ChromeOS for example may mostly be on netbooks without GPUs.
However, by the same token this might be a good catalyst for even cell phones and netbooks to get them, as the iPad already seems to [quick Google search]. So ultimately the GPU may be the killer stat on mobile, though it probably already is. I'm just a rambling software dev.
Netbooks have GPUs. The Intel GPU is probably most common and least performant, but it can still accelerate OpenGL to faster-than-software speeds. And, new netbooks are shipping with nVidia Ion, which is quite fast. (I can play 1080p h.264 on my nettop with an Ion GPU.)