When working with GGUF, what chat templates do you use? Pretty much every GGUF I've imported into Ollama has given me garbage responses. Converting the tokenizer JSON has yielded mixed results.
For example, how do you handle the Phi-4 models' GGUF chat template?
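For context, here's the kind of Modelfile sketch I've been trying for Phi-4-style imports — the `<|im_start|>` / `<|im_sep|>` / `<|im_end|>` tokens are my assumption from reading the model's `tokenizer_config.json`, so treat this as a starting point rather than a known-good template:

```
# Hypothetical path to the converted GGUF
FROM ./phi-4.gguf

# Assumed Phi-4 chat format (ChatML-like with <|im_sep|>);
# verify the exact special tokens against tokenizer_config.json
TEMPLATE """{{ if .System }}<|im_start|>system<|im_sep|>{{ .System }}<|im_end|>{{ end }}<|im_start|>user<|im_sep|>{{ .Prompt }}<|im_end|><|im_start|>assistant<|im_sep|>"""

# Stop generation at the end-of-turn token
PARAMETER stop <|im_end|>
```

Even with something like this, I'm not sure whether Ollama is supposed to pick up the chat template embedded in the GGUF metadata automatically or whether a hand-written TEMPLATE is always required — that's really the heart of my question.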