
Sure, it happens. How often it happens really depends on so many factors though.

For example, I have a setup where a model has some actions defined in its system prompt that it can output, when appropriate, to trigger those actions. The interesting bit is that initially I was using openhermes-mistral, which is famous for its extreme attention to the system prompt, and it almost never made mistakes when calling the definitions. Later I swapped it for llama-3, which is way smarter but isn't tuned to be nearly as attentive, and it far more often makes up alternative names that don't get fuzzy-matched properly. Someone anthropomorphizing it might say it lacks discipline.
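
In case it helps picture it, the matching step is roughly like the minimal sketch below. The action names and the use of difflib are my own assumptions for illustration, not the actual setup:

    import difflib

    # Hypothetical action names; in the setup described above, these are defined
    # in the model's system prompt and the model is expected to emit one verbatim.
    DEFINED_ACTIONS = ["set_timer", "play_music", "get_weather"]

    def resolve_action(emitted: str, cutoff: float = 0.6) -> str | None:
        """Fuzzy-match an action name emitted by the model against the defined list."""
        matches = difflib.get_close_matches(emitted, DEFINED_ACTIONS, n=1, cutoff=cutoff)
        return matches[0] if matches else None

    # A less attentive model might emit "start_timer" instead of "set_timer";
    # fuzzy matching recovers that, but a fully made-up name falls below the cutoff.
    print(resolve_action("start_timer"))  # -> "set_timer"
    print(resolve_action("queue_song"))   # -> None

The cutoff is the knob: too loose and made-up names get silently mapped to the wrong action, too strict and minor slips from the model fail outright.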



