Hacker News

In llama.cpp, inference runs on the CPU using AVX2 optimizations. You don't need a GPU at all.

It runs on my 2015 ThinkPad!



