
Apple consumerist NPC followers will find excuses for it anyway.


They wrote a book based on that course and its notes. https://www.cs.ubc.ca/~rbridson/fluidsimulation/

It's for computer graphics. Fluid simulation for engineering, for example simulating air pressure on an aircraft under development, is more precise, and the numerical methods are more complex.


I'm kind of curious about those models. Are there any major differences aside from the assumption of incompressibility? I suppose they probably don't do equation splitting either?

What does the CFD community do? Something like the Finite Element Method? I sat in on a grad course on continuum mechanics from the mech eng. department, but we never strayed from abstract mathematics.


In CG fluid simulations, one either uses grid-based Eulerian methods (Bridson’s book uses a thing called a MAC grid), or particle-based Lagrangian methods such as SPH (smoothed particle hydrodynamics). They all use the Navier-Stokes equations and the incompressibility condition; the difference is how you approximate them (with a tradeoff between realism and performance). Each method has its own quirks (such as the PIC method suffering from unwanted viscosity, and FLIP suffering from numerical instability). Nowadays the grid-based people use APIC a lot, because it seems to solve the disadvantages of both (you can see the links in the comment above).
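If a concrete picture helps, below is a minimal NumPy sketch of the particle-based side: estimating the density at each particle by summing a smoothing kernel over all other particles. The kernel choice (poly6), the brute-force all-pairs loop, and the parameter values are illustrative assumptions only; real SPH solvers use neighbour-search structures and go on to compute pressure and viscosity forces from these densities.

    import numpy as np

    def poly6_kernel(r, h):
        """Standard poly6 smoothing kernel for 3D SPH; zero outside radius h."""
        coeff = 315.0 / (64.0 * np.pi * h**9)
        return np.where(r <= h, coeff * (h**2 - r**2)**3, 0.0)

    def sph_densities(positions, particle_mass, h):
        """Density at each particle: rho_i = sum_j m * W(|x_i - x_j|, h)."""
        diffs = positions[:, None, :] - positions[None, :, :]   # pairwise offsets
        dists = np.linalg.norm(diffs, axis=-1)                  # pairwise distances
        return particle_mass * poly6_kernel(dists, h).sum(axis=1)

    # Usage: 500 random particles in a unit box (hypothetical values).
    pos = np.random.rand(500, 3)
    rho = sph_densities(pos, particle_mass=0.02, h=0.1)
    print(rho.mean())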


There's a new trend of using deep learning to replace and mix the solvers. Some ongoing research looks at tackling compressible flows. Accelerating lattice Boltzmann methods with DL is also under study at various labs. See https://github.com/jolibrain/fluidnet_cxx for a reimplementation of FluidNet with ATen/PyTorch tensors.

The literature on DL + CFD is growing steadily, with some interesting papers at machine learning conferences. We are seeing the first set of applications in industry as well, very exciting!
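To sketch what "replacing the solver" can mean in practice (a hedged toy example, not FluidNet's actual architecture or training setup; layer sizes, loss, and data are placeholders): a small CNN is trained to map a velocity-divergence field to a pressure field, standing in for the expensive linear solve inside the projection step.

    import torch
    import torch.nn as nn

    # Toy surrogate: divergence field in, predicted pressure field out.
    surrogate = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),
    )

    divergence = torch.randn(8, 1, 64, 64)        # fake batch of divergence fields
    target_pressure = torch.randn(8, 1, 64, 64)   # would come from a classical solver

    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss = nn.functional.mse_loss(surrogate(divergence), target_pressure)
    loss.backward()
    optimizer.step()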


Fluid dynamicist here. I wasn't aware of the machine learning work you've mentioned. They're addressing an important problem, but I'm disappointed in the paper based on a brief look. I see no comparison against experimental data (validation). Indeed, the cases the authors compute most likely have no corresponding experiments, but do look cool. I'd recommend that the authors learn more about verification, validation, and uncertainty quantification for fluid dynamics. And I'd encourage all machine learning folks approaching CFD to present your work at both fluids and machine learning conferences. I think problems like this would be reduced or fixed entirely by more interactions with fluid dynamicists.

Another smaller recommendation. What they call a "MAC grid" is called a staggered grid or mesh by CFD folks in my experience, so it might be better to use this terminology instead. The reference they cite is also out of date. I'd recommend something like this instead: https://www.sciencedirect.com/science/article/pii/S002199919...

This newer paper has higher-order extensions of the method the 1965 paper uses and, as I recall, goes into much more detail about the properties of the schemes.
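For anyone who hasn't met the term, here's a rough NumPy sketch of what "staggered" means in practice; the array shapes follow a common convention, but the details are assumptions that vary between codes. Pressure lives at cell centres while the velocity components live on cell faces, which makes the discrete divergence a natural central difference.

    import numpy as np

    N, dx = 64, 1.0 / 64
    p = np.zeros((N, N))        # pressure at cell centres
    u = np.zeros((N + 1, N))    # x-velocity on vertical cell faces
    v = np.zeros((N, N + 1))    # y-velocity on horizontal cell faces

    def divergence(u, v, dx):
        """Per-cell divergence (du/dx + dv/dy) from face velocities."""
        return (u[1:, :] - u[:-1, :]) / dx + (v[:, 1:] - v[:, :-1]) / dx

    div = divergence(u, v, dx)  # shape (N, N): one value per cell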


There are many recent works originating from the CFD community now. Too many to list here, you can PM me if you don't find them easily online for some reason.


> I'm kind of curious about those models. Are there any major differences aside from the assumption of incompressibility? I suppose they probably don't do equation splitting either?

Other folks have already discussed differences in numerical methods, so I'll discuss the other major difference: turbulence modeling.

I don't think that turbulence modeling is addressed in computer graphics, as physical accuracy does not seem to be a major priority. The word "turbulence" and variants of it do not appear in the linked notes.

From an equations perspective, the "raw" Navier-Stokes equations are used for "direct numerical simulation" (DNS). The resolution requirements (e.g., grid size and time step) to accurately simulate turbulent flows make the computational cost very high for all but the most trivial flows. Using larger grids and time steps reduces the accuracy far too much. So instead of solving the Navier-Stokes equations, typically one of two different sets of equations derived from the Navier-Stokes equations is solved. These are the Reynolds-averaged Navier-Stokes (RANS) equations (a statistical approach), dating back to the 19th century, and the large eddy simulation (LES) equations (applying spatial filters instead of averages), dating back to the 1960s. The RANS equations typically compute time- or "ensemble"-averaged quantities. The LES equations compute a filtered version of the fields, including only the large scales on the grid. These equations have lower computational requirements, but include new "unclosed" terms that require modeling. LES is typically viewed as more credible, though in my experience RANS computes the quantities you typically want. Well-designed LES schemes will converge to DNS as the grid is refined; this is not true for RANS.
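To make the "unclosed" terms concrete, here is the standard textbook sketch for the RANS case (incompressible flow assumed):

    % Reynolds decomposition: split each field into a mean and a fluctuation,
    %   u_i = \bar{u}_i + u_i'
    % Averaging the incompressible Navier-Stokes momentum equation gives
    \frac{\partial \bar{u}_i}{\partial t}
      + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
      = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
      + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
      - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
    % The last term, the Reynolds stress \overline{u_i' u_j'}, is the unclosed
    % part a RANS turbulence model must approximate; LES filtering leaves an
    % analogous subgrid-scale stress term.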

Turbulence modeling unfortunately has not proved to be as successful as it needs to be. Turbulence modeling might be impossible in some sense, as there's no reason to believe that the information one has available can be used to estimate the information one needs to accurately model turbulence. I view these models as requiring empirical data and not generalizing well.


Thank you very much for this response. I found it helpful.

I've been reading some of the research papers on waves and fluids from the late 1800s. One I went through a couple of weeks ago was Reynolds's 1883 paper on turbulent flow [1]. It's interesting going through old papers. They're a lot more casual and meandering than modern ones, and I feel like I get more insight into how the sausage is made that way.

I still wonder about some things, though. I'm familiar with the main CG fluid techniques, and it seems that they are used in real scientific simulations on occasion. For example, smoothed-particle hydrodynamics has been used in ocean wave simulations, and it appears to validate against wave tank experiments. I actually was going to use it myself for simulations of wave-swept environments.

But, aside from bumping into Bridson at a conference, I haven't gotten many chances to speak to someone who really knows fluids well. I was wondering if you'd be willing to answer some more questions of mine about how the CG methods compare to DNS, LES and RANS. If so, send me an email and maybe we can chat about it. My address is just my HN username at gmail.com.

[1]: https://royalsocietypublishing.org/doi/abs/10.1098/rspl.1883...


In short, the two fields just look similar; they are actually extremely different.

Physical simulations need to preserve entropy, the maximum principle, energy conservation and other conservation laws, and consistent states, and they need to converge as the mesh is refined.

There are multiple equations which model different forms of fluid:
1. Incompressible Euler (for liquids)
2. Compressible Euler (for non-viscous gases)
3. Navier-Stokes equations (for viscous fluids)

There are multiple solver methods:
1. Finite difference
2. Finite element
3. Discontinuous Galerkin finite element
4. Finite volume

There are multiple ways of handling the equations: equation splitting is just one of many possible methods.

Just because the equation is unique does not mean that the solution is unique. A single equation can provably have multiple, even infinitely many, solutions for the same initial condition. Computer graphics fluid simulation does not care about this (with good reason), and hence its simulations, even though they look kind of nice, are often incorrect, since they do not exhibit various physical characteristics that must be preserved.

In contrast, the qualitative/quantitative constraints in physical simulations are very strict. You need to know a lot of theoretical math to even understand whether you are computing the correct solution.
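As a tiny illustration of the conservation point above, under simplifying assumptions (1D linear advection, periodic boundaries, first-order upwind fluxes): a finite-volume update moves the quantity between cells only through shared face fluxes, so the discrete total is conserved to round-off. All names and values are illustrative.

    import numpy as np

    def upwind_step(q, speed, dx, dt):
        """One conservative step: q_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})."""
        # Upwind flux at each cell's left face (periodic boundaries via roll).
        flux = speed * np.roll(q, 1) if speed > 0 else speed * q
        return q - dt / dx * (np.roll(flux, -1) - flux)

    q = np.zeros(200)
    q[80:120] = 1.0                       # a square pulse
    dx, dt, speed = 1.0 / 200, 0.002, 1.0
    total_before = q.sum()
    for _ in range(100):
        q = upwind_step(q, speed, dx, dt)
    print(total_before, q.sum())          # the two totals match to round-off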


To use a numerical solver, e.g. Runge-Kutta, you don't need to master much.
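For reference, the classic fourth-order Runge-Kutta step looks like this; the test problem below is just an example.

    def rk4_step(f, t, y, dt):
        """One RK4 step for dy/dt = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Integrate dy/dt = -y from y(0) = 1; the exact value at t = 1 is e^-1.
    t, y, dt = 0.0, 1.0, 0.01
    while t < 1.0:
        y = rk4_step(lambda t, y: -y, t, y, dt)
        t += dt
    print(y)  # ~0.3679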


It's actually a nifty idea; it could be implemented as something like a blockchain in terms of distribution and syncing diffs among peers.


Hey, maybe work geolocated "check-ins" into it, and a gig model where you can pay someone to do your merges!


I'm sure you can figure out some way to work deep learning into it. How are you ever going to disrupt anything with a buzzword density below five nines?


> distribution and syncing diffs among peers

That's the stated purpose of git. Git syncs diffs in a distributed way. The whole point of the article was that we have technology that works and is very well supported, so we should use and improve it rather than reinvent it for the sake of a new technology.


Git and blockchains are both based on the Merkle tree data structure. So you could pretty easily prototype a text-based, diff/merge-able schema with a single file via git, and then implement the protocol as a smart contract once you figure out the semantics. Multi-file might map to a key-value store such as what's implemented with IBM HLF/Composer. I guess something like the protobuf schema evolution rules would be good inspiration as well.
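For intuition, here's a toy sketch of the shared idea: content-addressed nodes whose hashes chain upward, so any change in a leaf changes the root. The pairing scheme below is the simplest textbook Merkle tree, not git's object layout or any particular chain's format.

    import hashlib

    def sha(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        """Hash the leaves, then hash concatenated pairs until one root remains."""
        level = [sha(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate the last node on odd levels
                level.append(level[-1])
            level = [sha(a + b) for a, b in zip(level[0::2], level[1::2])]
        return level[0]

    print(merkle_root([b"file1 contents", b"file2 contents", b"file3 contents"]).hex())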


No criticism intended, but how is git based on a Merkle tree? Commits reference each other's hashes to build a DAG, but in a Merkle tree, if I understand correctly, the non-leaf nodes only serve the purpose of simplifying a hash check of the underlying leaf nodes, a tradeoff between processing (checking all the hashes) and storage space (storing the non-leaf nodes in addition to the actual data). I just learned about this a few days ago though, so I might be incorrect.


The funny thing is that a huge part of the world hasn't realized yet that a blockchain is in fact a p2p network with the addition of increased trust in the form of proof-of-work. If you could define working on a bug, providing a changeset, reviewing a changeset, or running tests on a changeset as proof-of-work, you could actually build a development system based on a blockchain.


How will Linux ever recover? I want the kernel to be based and red-pilled.


this. I was just about to write that the tutorial should be shown to every Electron developer.


A GPU reduces training time considerably. For development it's very convenient to use a Linux laptop with Nvidia Optimus. I was able to train a quite complex model on the laptop's Nvidia GPU (via the optirun process) while continuing to work on the Intel SoC's GPU. However, GPU memory is crucial; it's better to choose a laptop with 4GB or more of video RAM.


I've found that many models require 8GB or more; the T1080 cards are "just about" good enough


He's a great teacher, but I prefer to see linear algebra from the point of view of linear maps/operators/transformations, with matrices as their representations. His approach, as far as I know, starts rather from matrix arithmetic.


To each their own, but I think there is value in seeing them multiple ways. The rows can be looked at as a system of equations. The columns can be vectors to be combined. The matrix as a whole can represent the covariance of a Kalman filter or describe a confidence ellipsoid. They can be an adjacency matrix describing the edges of a graph. They can be your entire data set, and so on. Some of these uses feel more like operands than operators.
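To make the operator-versus-operand point concrete, here's the same small matrix read two of those ways (toy numbers, nothing canonical):

    import numpy as np

    A = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)

    # As an operator: a linear map applied to a vector.
    x = np.array([1.0, 2.0, 3.0])
    print(A @ x)                     # the image of x under the map

    # As an operand: the adjacency matrix of a graph, where (A @ A)[i, j]
    # counts the walks of length 2 from node i to node j.
    print((A @ A).astype(int))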


Is there an online course that covers linear algebra in that way?


Gil Strang taught 18.06, the more calculation-focused and elementary version of linear algebra, when I was at MIT. The approach mentioned by the OP was what was taken in the more theoretical version, 18.701. I think that class has notes and lectures on OCW.


Not an online course, but Linear Algebra by Serge Lang is a nice little book in this spirit (good to look at if you’re familiar with LA basics).


3blue1brown has great linear algebra videos that take it from the transformations point of view.


Makes inappropriate advances.


There are lots of different GI methods; most of them use spatial acceleration structures such as BVHs and KD-trees after triangulation. Fluids use a grid for discretisation; behind the scenes the PDEs are solved by means of finite difference methods, which has nothing to do with ray intersection tests.
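For a sense of what "solving the PDEs by finite differences on a grid" looks like in the simplest case, here's a hedged sketch: a few Jacobi sweeps on a pressure-Poisson-style equation with a point source and zero boundaries. Real fluid solvers use better linear solvers and boundary handling; the grid size and iteration count are arbitrary.

    import numpy as np

    N, dx = 64, 1.0 / 64
    b = np.zeros((N, N))
    b[N // 2, N // 2] = 1.0            # a point source as the right-hand side
    p = np.zeros((N, N))

    for _ in range(200):               # Jacobi sweeps for laplacian(p) = b
        p_new = p.copy()
        p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                    + p[1:-1, 2:] + p[1:-1, :-2]
                                    - dx**2 * b[1:-1, 1:-1])
        p = p_new

    print(p[N // 2, N // 2])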

