I've been considering an AI project for consuming a conda build recipe, digging into the project's codebase to extract extra information, and converting the whole thing into a Nix flake--which would be a bit more stable. I figure you could test for equivalence in a few ways and feed the outputs of those tests back into the model. Hopefully there's enough context in the conda recipe, the project codebase, and whatever errors pop out to get some projects converted with minimal handholding.
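To make the idea concrete, here's a minimal sketch of the emit-a-flake step, assuming the dependencies have already been extracted from the recipe's meta.yaml. The name-mapping table and the flake layout are both assumptions for illustration, not a real converter:

```python
# Hypothetical sketch: given dependency names already pulled out of a
# conda recipe, render a minimal flake.nix skeleton. The mapping table
# and flake structure are illustrative assumptions only.

# Assumed mapping from conda package names to nixpkgs attributes;
# a real tool would need a much larger (and validated) table.
CONDA_TO_NIXPKGS = {
    "numpy": "python3Packages.numpy",
    "scipy": "python3Packages.scipy",
}

def render_flake(pkg_name, deps):
    # Fall back to a guessed python3Packages attribute for unknown names.
    nix_deps = [CONDA_TO_NIXPKGS.get(d, f"python3Packages.{d}") for d in deps]
    dep_list = " ".join(f"pkgs.{d}" for d in nix_deps)
    return f'''{{
  description = "{pkg_name} (converted from a conda recipe)";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = {{ self, nixpkgs }}:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {{
      devShells.x86_64-linux.default = pkgs.mkShell {{
        buildInputs = [ {dep_list} ];
      }};
    }};
}}
'''

flake = render_flake("myproject", ["numpy", "scipy"])
print(flake)
```

The interesting (and hard) parts the model would actually handle--parsing the Jinja-templated meta.yaml, digging through the codebase for build quirks, and checking the two builds for equivalence--are exactly what this sketch leaves out.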
Because regardless of what the cool kids are doing, important work is being done in conda, and usually by people whose expertise isn't software packaging.