I should note that I used q5 quantisation with llama.cpp: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGU...
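For anyone wanting to reproduce this, a q5 GGUF runs with llama.cpp roughly like so. This is a sketch, not my exact command: the model filename is an assumption (the link above is truncated, so grab whichever q5 variant file is listed there), and older llama.cpp builds name the binary ./main instead of ./llama-cli.

```shell
# Assumed filename for the q5-quantised weights; adjust to the file you downloaded.
# [INST] ... [/INST] is the Mistral-Instruct prompt format.
./llama-cli -m mistral-7b-instruct-v0.1.Q5_K_M.gguf \
    -p "[INST] Write a haiku about autumn. [/INST]" \
    -n 128 --temp 0.7
```

-n caps the number of generated tokens; --temp sets the sampling temperature.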