I'd figure it wouldn't be too hard to record a couple of hundred hours of different kinds of fires burning, maybe "re-encode" / transform all the frames from pixel data into a more convenient representation like motion vectors, and turn that into some sort of model.
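As a sketch of what that "pixel data → motion vectors" step could look like (this is classic exhaustive block matching, not any particular production pipeline; the frames and sizes here are made up for illustration):

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Estimate per-block motion vectors between two grayscale frames
    via exhaustive block matching (sum of absolute differences)."""
    h, w = prev.shape
    vecs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block]
            best, best_v = None, (0, 0)
            # Search a (2*search+1)^2 window for the best-matching block.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = curr[yy:yy + block, xx:xx + block]
                    sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vecs[by, bx] = best_v  # stored as (dy, dx)
    return vecs

# Toy frames: a bright square that moves 2 px down between frames.
prev = np.zeros((32, 32), dtype=np.uint8)
curr = np.zeros((32, 32), dtype=np.uint8)
prev[4:12, 4:12] = 255
curr[6:14, 4:12] = 255
mv = block_motion_vectors(prev, curr)
```

In practice you'd probably reach for a dense optical-flow method (or just reuse the motion vectors a video codec already computes) rather than hand-rolled block matching, but the output shape of the data is the same idea: a grid of displacement vectors per frame pair.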
Maybe a ControlNet or motion model to use with Stable Diffusion? Or some kind of proprietary output model that could be used with video editing software / motion capture?
Usually the hardest part with AI seems to be coming up with the data to train with. In this case, that should be the least of one's problems.
You could even record fires in front of a green screen and, hell, set it up so that you can control variables like wind direction (with fans), the intensity of the fire, the temperature of the fire, ...
It might even be possible to train the model "in reverse", such that the goal for the AI is to come up with the correct set of motion vectors when given wind direction and speed, intensity, and temperature as inputs. Since you can control these variables and record the real result, you have the ground truth for something like reinforcement learning.
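Since you'd have ground-truth pairs (control settings on set → measured motion field), that's a plain supervised setup. A minimal sketch, with entirely made-up toy data standing in for the recordings and a linear model standing in for whatever net you'd actually use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: each recorded clip is labelled with the control
# settings used on set, and the target is a flattened field of motion
# vectors extracted from it (here a 16x16 grid of 2D vectors).
n_clips, n_outputs = 200, 16 * 16 * 2
conditions = rng.uniform(0, 1, size=(n_clips, 4))  # [wind_dir, wind_speed, intensity, temp]
true_map = rng.normal(size=(4, n_outputs))         # stand-in "physics" generating the toy data
targets = conditions @ true_map + 0.01 * rng.normal(size=(n_clips, n_outputs))

# Simplest possible conditional model: linear least squares from control
# settings to the motion-vector field. A real system would use a neural
# net, but the supervised conditions -> motion-field framing is the same.
weights, *_ = np.linalg.lstsq(conditions, targets, rcond=None)

# Predict the motion field for an unseen combination of settings.
query = np.array([[0.3, 0.8, 0.5, 0.6]])
pred = query @ weights
```

The point of the toy: because you set the knobs yourself, every clip comes pre-labelled, so no manual annotation pass is needed before training.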
So the question is: am I just overly optimistic and dumb, or is this actually a relatively easy thing to do?
You probably need training data of the “superficially looks like realistic fire but the details are wrong” kind, or otherwise you’ll get the “but the fingers are all weird” equivalent of image generation.