Seems great for my reinforcement learning models. Instead of passing my hyperparameters through the TensorFlow CLI API and editing the training file a line at a time to take in an additional hyperparameter, I can just set them directly through the CLI with Fire.
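For anyone curious, the pattern is roughly this; a minimal sketch, where the train function and its hyperparameters are made up for illustration:

```python
import fire

def train(learning_rate=1e-3, gamma=0.99, batch_size=64):
    """Adding a new hyperparameter is just adding a keyword argument."""
    print(f"lr={learning_rate}, gamma={gamma}, batch_size={batch_size}")

if __name__ == "__main__":
    # Fire exposes every keyword argument as a CLI flag, e.g.
    #   python train.py --learning_rate=3e-4 --gamma=0.95
    fire.Fire(train)
```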
But then how will you keep track of which parameters worked well? I've essentially been storing my kwargs in JSON and haven't felt a need to control anything directly from the CLI.
Good question. For me, I programmatically generate a folder with the hyperparameter key-values in the name and store the checkpoints under it. As you can imagine, this can get out of control quickly if not managed well, but it works for 1-3 person projects. For anything larger-scale or more organized, I'd recommend looking into Comet ML, which lets you query and filter your experiments by hyperparameter ranges instead of manually scanning folder names.
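Something like this, as a rough sketch (the hparams dict here is just an example):

```python
import os

hparams = {"lr": 3e-4, "gamma": 0.99, "batch_size": 64}

# e.g. "runs/batch_size=64_gamma=0.99_lr=0.0003"; sorting the keys
# keeps the name stable regardless of dict ordering
run_dir = os.path.join(
    "runs", "_".join(f"{k}={v}" for k, v in sorted(hparams.items()))
)
os.makedirs(run_dir, exist_ok=True)  # checkpoints go under run_dir
```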
I did that until I hit the Linux directory name length limit, lol. Now I hash the hyperparameters dict to get the directory name and store the full dict in a JSON file inside it. Totally ad hoc, and I'm sure a better solution exists.
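In case it helps anyone, the hashing trick looks roughly like this, assuming a flat, JSON-serializable hyperparameters dict:

```python
import hashlib
import json
import os

hparams = {"lr": 3e-4, "gamma": 0.99, "batch_size": 64}

# sort_keys makes the hash stable across dict orderings
blob = json.dumps(hparams, sort_keys=True)
run_dir = os.path.join("runs", hashlib.sha256(blob.encode()).hexdigest()[:16])
os.makedirs(run_dir, exist_ok=True)

# keep the human-readable hyperparameters next to the checkpoints
with open(os.path.join(run_dir, "hparams.json"), "w") as f:
    json.dump(hparams, f, indent=2)
```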