You want to control certain aspects of the output and leave only the rest up to the generative AI. The issue is that AI models don't expose a reliable mechanism for doing so.
That's not a fundamental limitation of the models, even if it's present in the products built on them: if you want to populate a database from an LLM, you can constrain decoding so that at each step only the subset of tokens that would be valid at that point can be sampled.
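A minimal sketch of that idea, with a toy stand-in for the model (all names here are hypothetical, and the random logits stand in for a real LLM's next-token scores): at each decoding step, mask every token the constraint rules out to negative infinity before the softmax, so only valid tokens can ever be sampled. The toy "grammar" accepts digit strings, standing in for, say, an integer column in the database.

```python
import math
import random

# Toy vocabulary; a real LLM would have tens of thousands of tokens.
VOCAB = ["0", "1", "2", "a", "b", "<eos>"]

def valid_tokens(prefix):
    # Toy constraint: only digits are valid, and <eos> only once
    # we have at least one digit (the field must not be empty).
    allowed = {"0", "1", "2"}
    if prefix:
        allowed.add("<eos>")
    return allowed

def fake_logits(prefix, rng):
    # Stand-in for the model's next-token logits.
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def constrained_sample(prefix, rng):
    logits = fake_logits(prefix, rng)
    allowed = valid_tokens(prefix)
    # Mask invalid tokens to -inf so they get zero probability.
    masked = [l if tok in allowed else float("-inf")
              for tok, l in zip(VOCAB, logits)]
    exps = [math.exp(l) for l in masked]   # exp(-inf) == 0.0
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample from the renormalized distribution over valid tokens.
    r = rng.random()
    acc = 0.0
    for tok, p in zip(VOCAB, probs):
        acc += p
        if r <= acc:
            return tok
    return "<eos>"

def generate(max_len=8, seed=0):
    rng = random.Random(seed)
    out = []
    while len(out) < max_len:
        tok = constrained_sample(out, rng)
        if tok == "<eos>":
            break
        out.append(tok)
    return "".join(out)
```

However the unconstrained logits come out, the output can only ever be a non-empty digit string; the invalid tokens simply carry zero probability at every step. Real implementations do the same thing with a grammar or schema compiled into a per-step token mask.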