What if you have a MacBook Air with 16GB? (The benchmarks don't seem to show memory usage.)



You could definitely run an 8B model on that, and some of those are getting very capable now.
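
If you want to try it, here's a minimal sketch using the official Ollama Python client (assuming Ollama is installed, the server is running, and you've pulled a model; llama3.1:8b below is just an example tag, substitute whatever 8B model you like):

    # Minimal sketch: query a local 8B model through the Ollama Python
    # client (pip install ollama). Assumes the Ollama server is running
    # and the example tag llama3.1:8b has been pulled beforehand.
    import ollama

    response = ollama.chat(
        model="llama3.1:8b",  # example tag; any local 8B model works
        messages=[{"role": "user", "content": "Summarize mmap in two sentences."}],
    )
    print(response["message"]["content"])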

The problem is that often you can't run anything else. I've had trouble running larger models in 64GB when I've had a bunch of Firefox and VS Code tabs open at the same time.


I thought VS Code was supposed to be lightweight, though I suppose the extensions can add up.


You could run 8B models with larger contexts, or even 9-14B parameter models if they're quantized.

Qwen2.5 Coder 14B at a 4-bit quantization could run, but you'll need to be diligent about what else you have in memory at the same time.
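
The back-of-envelope math, as a rough sketch (weights only; the real footprint adds KV cache, context, and runtime overhead on top, so treat this as a floor):

    # Back-of-envelope estimate of quantized weight size.
    # Weights only: KV cache and runtime overhead come on top.
    def approx_weights_gb(params_billions: float, bits_per_weight: float) -> float:
        total_bytes = params_billions * 1e9 * bits_per_weight / 8
        return total_bytes / 1024**3

    print(f"{approx_weights_gb(14, 4):.1f} GB")  # ~6.5 GB for 14B at 4-bit
    print(f"{approx_weights_gb(8, 4):.1f} GB")   # ~3.7 GB for 8B at 4-bit

That's why 14B at 4-bit is tight but workable on a 16GB machine once the OS and your other apps take their share.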


I have an M2 Air with 24GB and have successfully run some 12B models such as mistral-nemo. I had other stuff running as well, but it's best to give the model as much of the machine as possible.


I recently upgraded to exactly this machine for exactly this reason, but I haven't taken the leap and installed anything yet. What's your favorite model to run on it?



