Has no one tried to approach this with AI?

I'd figure it wouldn't be too hard to record a couple hundred hours of different kinds of fires burning, "re-encode" / transform all the frames from pixel data into a more convenient representation like motion vectors, and turn that into some sort of model.
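For the pixel-data-to-motion-vectors step, something as off-the-shelf as dense optical flow might already be enough. A minimal sketch in Python, assuming OpenCV is installed; the fire.mp4 clip and output filename are just placeholders:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("fire.mp4")   # hypothetical recording of a fire
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    flows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # flow[y, x] = (dx, dy), a per-pixel motion vector between frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)
        prev_gray = gray

    cap.release()
    np.save("fire_motion_vectors.npy", np.stack(flows))  # raw training data

Each saved array is one frame's motion-vector field, which is a much more compact training target than raw pixels.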

Maybe a ControlNet or motion model to use with Stable Diffusion? Or some kind of proprietary model whose output could be used with video editing software / motion capture?

Usually the hardest part with AI seems to be coming up with the data to train with. In this case, that should be the least of one's problems.

You could even record fires in front of a green screen and, hell, set it up so that you can control variables like wind direction with fan(s), the intensity of the fire, the temperature of the fire, ...

It might even be possible to train the model "in reverse", in such a way that the goal for the AI is to come up with the correct set of motion vectors when given wind direction and speed, intensity, and temperature as inputs. Since you can control these variables and record the real result, you have the ground truth for something like reinforcement learning.
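As a toy version of that "reverse" direction, plain supervised regression already fits (no RL machinery needed): the controlled settings go in, a coarse motion-vector field comes out, and the recorded flows serve as ground truth. A rough PyTorch sketch, where the network shape, the 32x32 grid, and the random batches are all arbitrary assumptions:

    import torch
    import torch.nn as nn

    class FireFlowModel(nn.Module):
        def __init__(self, grid=32):
            super().__init__()
            self.grid = grid
            self.net = nn.Sequential(
                nn.Linear(4, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, grid * grid * 2),  # (dx, dy) per grid cell
            )

        def forward(self, conditions):
            # conditions: (batch, 4) = wind direction, wind speed, intensity, temperature
            return self.net(conditions).view(-1, self.grid, self.grid, 2)

    model = FireFlowModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # In practice these would come from the green-screen recordings;
    # random tensors here only to show the shapes.
    conditions = torch.rand(8, 4)
    target_flow = torch.rand(8, 32, 32, 2)

    optimizer.zero_grad()
    loss = loss_fn(model(conditions), target_flow)
    loss.backward()
    optimizer.step()

Whether a small network over a coarse grid captures enough of the turbulence is a separate question; the point is only that mapping controlled variables to motion vectors is an ordinary supervised-learning setup once the recordings exist.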

So the question is: am I just overly optimistic and dumb, or is this a relatively easy thing to do?




Pixar/Disney used AI to enhance the characters in their latest film, Elemental. They gave a technical presentation at SIGGRAPH two months ago: https://dl.acm.org/doi/abs/10.1145/3587421.3595467

There are videos showing the results with/without the AI detail pass.


You probably need training data of the “superficially looks like realistic fire but the details are wrong” kind; otherwise you’ll get the fire equivalent of image generation’s “but the fingers are all weird” problem.



