Mobile phones have quite powerful GPUs, even on lower-end devices. Many are based on ARM's Mali GPU series, which has up to 28 shader cores. That's not going to compete with the hundreds of cores that a PC GPU has available, but it's more than enough for a straightforward (but still very impressive) WebGL shader like the one in the link.
To be fair, this shader was published in GPU Gems 15 years ago (2004), back in the days of the GeForce 6 and Windows XP. Though the fact that it now fits in your pocket is cool.
Re: everyone impressed by how fluid it is with multitouch: The number of touch points doesn't affect the performance of a grid simulation. Using 10 fingers just sets high values at more places in the initial grid. The effort to solve each time step is the same after that. The computational effort scales with the resolution, not the boundary conditions/input.
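For anyone curious, here's roughly the shape of it as a toy NumPy sketch (my own illustration, not the demo's actual code; the grid size and function names are made up):

```python
import numpy as np

N = 256                      # grid resolution: the cost scales with N*N
density = np.zeros((N, N))   # the simulated quantity

def apply_touches(touches, radius=3, value=1.0):
    """Each touch just stamps values into a handful of cells."""
    for (i, j) in touches:
        density[i - radius:i + radius, j - radius:j + radius] = value

def step():
    """One time step visits every cell, regardless of input."""
    global density
    # stand-in for the real advection/diffusion: a 5-point average over all N*N cells
    density = 0.2 * (density
                     + np.roll(density, 1, axis=0) + np.roll(density, -1, axis=0)
                     + np.roll(density, 1, axis=1) + np.roll(density, -1, axis=1))

# one finger or ten: the work done in step() is identical
apply_touches([(64, 64)])
apply_touches([(32, 32), (96, 200), (180, 40)])
step()
```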
It could be in part because the default settings don't let you see the persistent levels of turbulence throughout the entire grid. Set the density to 0.996, velocity to 1, and vorticity high, and you'll see just how crazy it can get.
I also tried a high Density Diffusion - loved the slower movement, and that made me think: if only there were a way to export this to something like a moving wallpaper that I could set on my Ubuntu desktop.
Also, just setting Density Diffusion to 1 and leaving it makes the movement slow enough to screenshot nice wallpapers.
I do. Coming from Win 3.1, and because of restrictions on internet connectivity in our office back then, at first I had no idea how to use it and thought it was some advanced feature.
Impressive demo! WebGL is an awesome piece of tech that is only now starting to see wide adoption. I mean, it gives you access to hundreds of cores to compute, run, and display stuff :).
Too bad Apple isn't moving forward with WebGL 2 and Google is splitting the community in two with WebGPU.
95% of current games use OpenGL / DirectX 11 over Vulkan / DirectX 12. Vulkan seems to be a nice-to-have with lots of potential, in my opinion. I have yet to find a killer use case for Vulkan, on the web in particular.
With WebGL, in my current projects, the limiting factors are the network, JavaScript, and RAM more than OpenGL driver overhead... Maybe the reasoning behind it is to find common ground with Apple and support the investments around Stadia's core technologies?
I don't know enough about fluid dynamics to be the judge, but it seems like the engine is using genuine (2D) physics and it's not just "toy physics." Could someone who knows fluid mechanics comment on this?
Sure. It's a correct implementation of the projection method [1] for solving the 2D incompressible Navier-Stokes equations, so it is in fact using genuine physics. In terms of physical accuracy, there are some caveats - it's using the Jacobi method (chosen for its simplicity) to iteratively solve the diffusion and pressure Poisson equations, so the accuracy will be determined by how many iterations you're willing to do. That approach won't scale to larger systems. There are also some limitations from the choice of collocated (as opposed to staggered) grids and the approximations used for the boundary conditions. Nevertheless, for a small 2D simulation, you can get physically accurate results as long as you use enough Jacobi iterations and have a sufficiently fine grid.
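For readers who want to see what a Jacobi pass looks like, here's a minimal NumPy sketch of the pressure Poisson solve ∇²p = ∇·u (my own illustration under simplifying assumptions, not the demo's code; it uses periodic boundaries via np.roll to stay short, where the demo approximates solid walls):

```python
import numpy as np

def jacobi_pressure(div, h, iters=50):
    """Approximately solve lap(p) = div on a uniform grid of spacing h.
    Each iteration applies the 5-point Laplacian stencil rearranged
    for p[i, j]; accuracy is set by how many iterations you run."""
    p = np.zeros_like(div)
    for _ in range(iters):
        p = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
             + np.roll(p, 1, 1) + np.roll(p, -1, 1)
             - h * h * div) / 4.0
    return p
```

The resulting pressure gradient is then subtracted from the velocity field to make it divergence-free; that subtraction is the "projection" step.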
Also, it uses backward differencing, which makes it stable at long timesteps. That is great for running fast for visualization, but it is horribly inaccurate (or possibly even plain wrong) if your system has large gradients in flow speed.
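For the diffusion term, the backward (implicit) update looks roughly like this toy NumPy sketch (my own, not the demo's code): solving (I - ν·dt·∇²)u = u₀ damps errors for any dt, which is why it's unconditionally stable, and also why it smears sharp gradients at large time steps:

```python
import numpy as np

def diffuse_implicit(u0, nu, dt, h, iters=30):
    """Backward-Euler diffusion: solve (I - nu*dt*lap) u = u0 with
    Jacobi iterations. Stable for any dt, but a large dt strongly
    smooths regions with steep velocity gradients."""
    a = nu * dt / (h * h)          # diffusion number
    u = u0.copy()
    for _ in range(iters):
        u = (u0 + a * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                       + np.roll(u, 1, 1) + np.roll(u, -1, 1))) / (1 + 4 * a)
    return u
```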
Yes, I've found that computer-graphics-focused fluid simulations frequently choose stability over physical accuracy. These choices also result in unphysically high numerical viscosity. (I didn't check what finite difference stencil or finite volume scheme the code uses, though I presume it's a lower order accurate one that probably has a fair amount of numerical viscosity.) In principle, if you reduce the grid spacing and time step, it will converge, provided the software doesn't use any tricks like approximate square roots, etc.
The numerical methods for solving the PDEs that computer graphics folks use would surely be considered primitive by someone who develops engineering computational fluid dynamics software.
> I didn't check what finite difference stencil or finite volume scheme the code uses, though I presume it's a lower order accurate one that probably has a fair amount of numerical viscosity.
It's just the basic second order central difference. It also uses a first order approximation to the Dirichlet and Neumann boundaries, so that additional error will diffuse throughout the simulation region. It doesn't use any approximation tricks for square roots etc., so given appropriate floating point semantics (as a physicist I have no clue what the shader language specifies there) you can still get realistic and accurate results by reducing the spatial and time steps, which is easily doable for a small 2D simulation on modern GPUs.
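To spell out those stencils, here's a quick NumPy illustration (mine, not extracted from the shader): second-order central differences for the velocity divergence, and the first-order "copy the interior neighbor" treatment of a zero-gradient (Neumann) boundary:

```python
import numpy as np

def divergence(u, v, h):
    """Second-order central difference: du/dx + dv/dy at interior cells."""
    du_dx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
    dv_dy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * h)
    return du_dx + dv_dy

def neumann_boundary(p):
    """First-order approximation of dp/dn = 0: copy the adjacent
    interior value into each boundary cell (O(h) accurate)."""
    p[0, :], p[-1, :] = p[1, :], p[-2, :]
    p[:, 0], p[:, -1] = p[:, 1], p[:, -2]
    return p
```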
Ultimately, though, all the basic "best practices" for simulations of this kind - staggered grids, higher order derivative approximations, etc. - aren't very complicated and are well described in any CFD textbook. What really makes engineering CFD software complicated are things like handling complex geometries with dynamically refined meshes, efficiently solving the resulting linear equation systems at scale and coupling the fluid dynamics to other physical phenomena while retaining numerical stability and accuracy.
Don't get me wrong: Doing all the approximations to get a fast, visually appealing animation is perfect for what this demo aims to do.
Fluid simulations for science or engineering (think airflow around the next Boeing design or simulations of planet formation) are still very hard. And that is not because physicists are stupid, bad at coding or easily replaced with machine learning.
Sorry, I was unclear. I agree that the demo does what it intended to do. I was just listing a few additional reasons to believe it may not be physically accurate as-is. I know from previous discussions on HN that many readers are interested in this.
Totally a good idea to do that. I just want other readers to be aware of which applications it's useful for, and where other techniques are more appropriate.
If I have time for that, maybe I'll add custom D3D-specific effects that weren't in the original version; fluid simulation is one of them. BTW, it's easier to implement with compute shaders. WebGL only supports them on Chrome/Chromium, and only on desktop, which is why OP used some trickery with fragment shaders instead.
I clicked this thinking "yet another fluid simulation" but this is super cool. Played with every single parameter ;) Pausing, then drawing, then unpausing was a surprisingly pleasant effect.
Such a nice demo! I'm amazed by how smoothly it runs on my phone. Even if we've already seen similar shaders back in 2004, this one runs on the web, available to everyone, with no installation.
During my 3rd year at uni I played with WebGL 1 to create a fake fluid simulation; my demo looks like crap nowadays: http://jspdown.github.io/mod1/
This works with decent smoothness even on my 8-year-old, 35-dollar iPhone 4S.
I met a fellow at NASA Glenn in the 90s who worked on fluid simulations using an SGI workstation. I don't remember how long the simulations took to process, but it had to be several minutes per frame. I do remember those workstations cost well over 30 grand at the time.
I'd be a little surprised if multitouch affected performance significantly. It's probably simulating and rendering every cell every frame regardless. I couldn't notice a slowdown on my phone with five fingers.
Woah, you're right. I just tried it on my Pixelbook screen (I keep forgetting it's a touch screen), and it handled as many fingers as I could give it (all 10) with no lag.
I'm guessing the number of inputs doesn't really influence simulation speed since the shader runs each frame for each pixel anyway. When touching the screen, some pixels are changed to hot/colored, but that doesn't really change the amount of computation needed to simulate the whole screen.
This is gorgeous. It gives me a sense of joy and power when creating the vortices. The white glow seems to burn my eyes, even though I know my screen is not that bright. Feels like I'm a wizard.
Not quite standard. The trick is to use backward differencing in time to make it unconditionally stable even for longer time steps. Also, the Poisson equation for the pressure is only solved approximately, by doing Jacobi iterations until the solution is "good enough". And of course it is limited to 2D and the incompressible limit.
Just a random recommendation: I love connecting this from a laptop to a TV screen, hooking up a wireless or long-cable mouse, and playing it with my cousins / little kids. It's a one-time trick, but they always love it. (You can do the same with other programs / simulations; just enhancing the experience a little bit can do wonders.)
It depends on Google Analytics and will break unless ga() is defined. Most adblockers spoof it for you, but if you're only blocking requests to trackers (e.g. Pi-hole, uMatrix), the JavaScript will fail.