
You want to control certain aspects of the output and leave the rest up to the generative AI. The issue is that AI models don't have a reliable mechanism for doing so.



That's not a fundamental limitation of the models, even if it's present in the products built on them: if you want to populate a database from an LLM, you can constrain decoding at each step to the subset of tokens that would be valid at that point.
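A minimal sketch of that idea, assuming a toy vocabulary and a random stand-in for the model's next-token scores (real constrained decoding applies the same mask to an actual LLM's logits; the names here are all hypothetical):

```python
import math
import random

# Toy vocabulary; a real tokenizer's vocab is far larger.
VOCAB = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9",
         ",", "(", ")", "INSERT", "hello"]

def fake_logits():
    # Stand-in for a model's next-token scores over VOCAB.
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]

def constrained_argmax(logits, allowed):
    # Mask out every token not in the allowed set, then pick the
    # highest-scoring survivor. This is the core of constrained decoding:
    # the model ranks tokens, the constraint decides which are eligible.
    best_tok, best_score = None, -math.inf
    for tok, score in zip(VOCAB, logits):
        if tok in allowed and score > best_score:
            best_tok, best_score = tok, score
    return best_tok

def generate_int_literal(n_digits):
    # At every step only digits are valid, so the output is guaranteed
    # to be a well-formed integer literal no matter what the "model"
    # would otherwise prefer to emit.
    digits = set("0123456789")
    return "".join(constrained_argmax(fake_logits(), digits)
                   for _ in range(n_digits))

print(generate_int_literal(4))
```

In a real system the allowed set at each step comes from a grammar or schema (e.g. valid SQL for the database being populated) rather than a fixed digit set, but the masking step is the same.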






