Then he can wait for the RNG to produce this same random data. Eventually it will produce a file which matches. A dynamic solution of sorts, because he would have to be quick with the diff before the file starts changing again.
I feel that the compressor for a true random stream is a true random generator. If I quickly show you a screen of black-and-white unpredictable noise, and ask you what it was, you'd compress/understand/recall that as "generate_noise()". I do not feel that this is lossy compression, for what did you lose? The ordering of a random file? Random files have no order to lose.
When you're running a reproducible science experiment, it's probably a good idea to include the actual data. For random noise this could be the generator, the seed, and some good hashes, but only if the experimenter actually used a generator and seed. If the experimenter just grabbed noise from somewhere and used that, you want the actual data as part of reproducibility.
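For the seeded case, here is a rough sketch of what I mean (Python/NumPy is just my choice for illustration; the seed value, function names, and use of SHA-256 are assumptions on my part, not anything the experimenter described):

    import hashlib
    import numpy as np

    SEED = 20240101  # hypothetical seed; record it alongside the paper/code

    def make_noise(seed: int, n: int = 1_000_000) -> np.ndarray:
        """Regenerate the 'random' input deterministically from a recorded seed."""
        rng = np.random.default_rng(seed)  # PCG64; stream is stable for a given NumPy version
        return rng.integers(0, 256, size=n, dtype=np.uint8)

    def checksum(data: np.ndarray) -> str:
        """Hash the raw bytes so others can confirm they rebuilt the exact same data."""
        return hashlib.sha256(data.tobytes()).hexdigest()

    if __name__ == "__main__":
        noise = make_noise(SEED)
        # Publish the seed, generator name/version, and this digest with the experiment.
        print("sha256:", checksum(noise))

Publishing the seed, the generator name and version, and the digest lets someone else regenerate the bytes and check they got the identical data; if the noise came from anywhere else, you're back to shipping the file itself.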
He's running a model and needs to use the same random data each time.