Fun trick: Pause the simulation with the space bar. Then take your mouse and drag it across the surface of the water; you'll notice it still creates ripples, even in the paused state.
Drag the mouse rapidly back and forth in a very tiny area so that it creates a layered 'ripple' that grows and grows. If you spend about 2 minutes doing this, you can make the ripple go like 10 feet high, completely off the screen.
Then unpause the simulation for a massive tsunami.
Rather than dragging, just click in the same spot hundreds of times to make a huge tower of water, then unpause to see a single ring of concentric waves and perfect wave reflection/interaction effects.
0,0 is the center, bounds are -1 to 1. 0.1 radius is a reasonable spike, 1 radius is a huge swell. 1 strength is really big. 10 is ridiculous. Try negative numbers too!
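For anyone following along, those ranges map onto a drop call you can make from the browser console. A minimal, hypothetical sketch, assuming the demo exposes something in the shape of water.addDrop(x, y, radius, strength); the object and function names are my guess, only the parameter ranges come from the comment above.

```typescript
// Hypothetical console sketch: assumes a drop function of the form
// water.addDrop(x, y, radius, strength). The names are assumptions;
// only the coordinate/radius/strength ranges come from the comment above.
declare const water: {
  addDrop(x: number, y: number, radius: number, strength: number): void;
};

water.addDrop(0.0, 0.0, 0.1, 1.0);    // reasonable spike at the center
water.addDrop(0.5, -0.5, 1.0, 1.0);   // huge swell, off-center
water.addDrop(0.0, 0.0, 0.1, -10.0);  // ridiculous negative strength: a deep pit
```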
Doing this in a browser is a neat trick, but the actual state of the art in realtime GPU fluids is slightly more impressive. Nvidia's FleX middleware is a good example:
<= NOT a graphics programmer, in this era anyway. Back in my graphics days getting Phong shading working on a 286 was amazing to me.
So, does that FleX system BASICALLY imply that in another few decades the individual granularity of those particles will become so small and so numerous that we'll basically be modeling liquids and solids at the molecular level? It seems to me that it's only a lateral step from there to mimicking the fluid-like effects of explosive forces being applied to solids.
> ... that in another few decades the individual granularity of those particles will become so small and so numerous that we'll basically be modeling liquids and solids at the molecular level?
Also not a graphics programmer, but I am a scientist who has modeled liquids and solids at the molecular level. A lot of this is doable today, but I doubt it'll ever be applied directly to a macroscopic graphics engine. There are quite simply too many atoms: there's 1 mole (6e23 molecules) in 18g of water, while at best today's chips have several billion transistors (5e9). Even at one transistor per molecule, the numbers just don't add up.
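A quick back-of-the-envelope check of that scale gap, using only the figures above (a sketch, not a precise estimate):

```typescript
// Back-of-the-envelope sketch using only the numbers quoted above.
const AVOGADRO = 6e23;            // molecules per mole
const WATER_MOLAR_MASS_G = 18;    // grams of water per mole
const TRANSISTORS_PER_CHIP = 5e9; // "several billion transistors"

const moleculesPerGram = AVOGADRO / WATER_MOLAR_MASS_G;       // ~3.3e22
const gramsPerChip = TRANSISTORS_PER_CHIP / moleculesPerGram; // ~1.5e-13 g

// Even at one transistor per molecule, a whole chip covers roughly a tenth of
// a picogram of water: nowhere near a macroscopic body of water.
console.log(gramsPerChip.toExponential(1)); // "1.5e-13"
```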
There are lots of multi-scale modelling techniques though, where you directly model the system on the atomic scale in just a few small volumes, e.g. at the tip of a crack just before it propagates. It's not impossible that sort of thing will make an appearance, though I'd still be surprised. Graphics programming is all smoke and mirrors; if it's easier to approximate something using an unphysical process that looks right, people will favour that over an exact simulation that takes far more computing power.
Yeah. You bring up an excellent point. Smoke and mirrors. It won't be molecular...not with semiconductors anyway. Maybe when quantum holographic systems are operating our transporters it will be possible :D.
UNTIL THEN, all it has to do is get granular enough to fill the resolution of the current generation of monitor technology. Just like how ray tracing really only requires enough granularity to appear seamless at the current resolution.
The number of transistors is not really relevant here, since it still doesn't tell us anything about how many atoms we could simulate per second. If anything, we should rather be looking at the FLOPS a given chip can deliver.
That video is breathtaking. Like probably most people, I goofed off with N-body systems, and the size and performance of the systems shown there are light years ahead in comparison.
Some of those examples were clearly not real, but some - the Snickers bar being split - seemed pretty real.
I wonder if those very realistic shots are usable in UK adverts without disclaimers. As an example, see ads for mascara, which get regulated if they use computer-enhanced lashes and no disclaimer.
Actually, I was once talking to a friend who does photography for print ads: I was surprised to find out that most of the highly detailed stills of, say, lipstick or champagne coming out of a bottle are actual photographs.
I assumed many things were renders or that stuff was combined later in photoshop.
I had assumed that the champagne overflowing from a bottle was not shot with real champagne over and over again until you got the perfect shot.
I was also surprised to find out that when the background had a nice motion blur, it was sometimes actual motion blur, created with a rotating rig that held both the camera and the object being shot.
Regardless, it's awesome that a browser with WebGL can achieve that kind of speed and behavioral complexity. Is the ray tracing for the reflections part of OpenGL, or is that a separate library?
The only minor nit I can come up with is the lack of surface tension. When you pull the ball up through the surface of the water, some of the water should stick to the ball. Maybe the ball is made out of lotus leaves, though.
I'd also expect some more bubbles when violently stirring the pool with the ball.
Unfortunately the effects you describe are nontrivial. The method of water simulation used here doesn't lend itself to simulating things like surface tension or breaking waves, because the water surface is represented as a 2D height-map of displacements from equilibrium (i.e. the z coordinate as a function of x and y). With such a representation it simply isn't possible to represent water clinging to the sphere or dripping off it.
A particle-based system could simulate the effects you describe (by keeping track of the positions of some large number of water "molecules" and exchanging forces between pairs of particles), but the simulation would be far too costly to run in real time on typical hardware today. It also has other downsides: with a height-map-based representation it is trivial to calculate the normal to the water surface, which is needed for the lighting effects, whereas with a particle system you would have to reconstruct the surface in an additional step each frame, using something like the marching cubes algorithm.
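To illustrate the normals point, here's a rough sketch (my own illustration, not the demo's actual code) of how a surface normal falls straight out of a height-map via finite differences, following the "z as a function of x and y" convention above:

```typescript
// Sketch only: surface z = h(x, y) stored as a grid; the normal at a cell is
// proportional to (-dh/dx, -dh/dy, 1), estimated with central differences.
function heightMapNormal(
  h: number[][],    // h[y][x] = displacement from equilibrium
  x: number,
  y: number,
  cellSize: number  // grid spacing in world units
): [number, number, number] {
  const xm = Math.max(x - 1, 0), xp = Math.min(x + 1, h[0].length - 1);
  const ym = Math.max(y - 1, 0), yp = Math.min(y + 1, h.length - 1);
  const dhdx = (h[y][xp] - h[y][xm]) / ((xp - xm) * cellSize);
  const dhdy = (h[yp][x] - h[ym][x]) / ((yp - ym) * cellSize);

  const n: [number, number, number] = [-dhdx, -dhdy, 1];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0] / len, n[1] / len, n[2] / len];
}
```

With particles there is no such grid, which is why an extra surface-reconstruction pass (e.g. marching cubes) is needed before anything can be lit.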
A displacement map calculated in real time would be more appropriate. These simulations typically use real wave-front physics (Navier-Stokes), even if simplified.
I've been using this to test whether I've got WebGL acceleration working properly for ages. I'm pretty sure I've seen this posted a couple of times before though :)
That's awesome, and holy cow I think this is the first time I've ever seen Chrome on Linux actually render WebGL. Did they finally enable it by default in recent updates?
I've been playing with WebGL demos on Ubuntu for years, first Chrome then Firefox. It isn't new. You might also be impressed to find out that Steam is on Linux now too.
I remember trying to run this on my phone previously and it not working. I also remember it causing my CPU fan to fire up when I played with it on my laptop. The fact that the iPhone 6 runs this so well is pretty dang cool.
Same here! I remember trying it on an older iPhone and Android phone and it was choppy. Just tried it on my iPhone 6 and it's so smooth. I love seeing that progression in technology first hand.
Does anyone happen to know what the effect is called where the sky is reflected more strongly at shallow angles, and whether that is the same effect that makes rough surfaces reflective at very shallow angles?
It looks like it's only simulating the surface. Moving a ball of that relative size in that volume of water would cause all sorts of surface features if the full volume of water were being simulated.
Yeah, I just tested it on my mid-range Android phone with Chrome. It works surprisingly well; though it lags a bit, it's still usable. Really, all WebGL stuff should work on mobile since it was designed with that in mind. Ironically, it probably wouldn't work on my old desktop with the DirectX 9-capable Nvidia card.
Right. Refraction only occurs on the surface of the water, like the waves; the views through the sides are what you'd see from under the water at that point, not looking through a viewing window.
Well, if you want to get really technical, what you'd see from underwater would be much worse than what a camera sees, since your eyes require air to properly focus.
So if they wanted to make it what you would see underwater, they should just blur the crap out of it.
Nope, it's much simpler than that. It's just a grid of vertical positions, and each frame every cell moves toward the average position of its neighbors from the previous frame. See http://freespace.virgin.net/hugo.elias/graphics/x_water.htm for details.
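A minimal CPU-side sketch of the two-buffer scheme described in that tutorial (the demo itself runs on the GPU, and the constants here are illustrative):

```typescript
// Two height buffers: `prev` holds last frame, `curr` holds the frame before
// and gets overwritten with the new frame. Each cell is pulled toward the
// average of its four neighbors from the previous frame, then damped.
function stepRipples(
  prev: Float32Array,
  curr: Float32Array,
  w: number,
  h: number,
  damping = 0.99 // illustrative value; controls how quickly ripples fade
): void {
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      const i = y * w + x;
      const avg = (prev[i - 1] + prev[i + 1] + prev[i - w] + prev[i + w]) / 4;
      curr[i] = (2 * avg - curr[i]) * damping; // overshoot past the average, then damp
    }
  }
}

// Usage per frame: stepRipples(prev, curr, w, h); then swap prev and curr.
// A mouse click just bumps one cell's height in `prev`.
```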