Probably similar token rates out of the box, although I haven't done a straight comparison. Where they'll differ is in the sorts of questions they're good at: Llama 2 was trained (broadly speaking) for knowledge, Phi-2 for reasoning. And bear in mind that you can quantise Phi-2 down too; the starting point is fp16.
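For a rough sense of what quantising it down looks like, here's a minimal sketch using transformers + bitsandbytes to load the fp16 checkpoint in 4-bit (the model id is the standard microsoft/phi-2 on Hugging Face; the 4-bit settings and prompt are just illustrative, not anything specific to the parent's setup):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Quantise the fp16 weights to 4-bit at load time (requires bitsandbytes + a GPU).
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )

    tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
    model = AutoModelForCausalLM.from_pretrained(
        "microsoft/phi-2",
        quantization_config=bnb_config,
        device_map="auto",
    )

    # Phi-2 was trained on "Instruct: ... Output:" style prompts.
    prompt = "Instruct: What is 17 * 24?\nOutput:"
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))

If you're running on CPU instead, converting to GGUF and quantising with llama.cpp gets you a similar size reduction.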
