
This is very exciting for the local LLM space, where unified memory allows fast inference of large models.
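For anyone who wants to try this, here's a minimal sketch using llama-cpp-python, which runs through Metal on Apple Silicon so the weights sit in unified memory. The model path is a placeholder; assumes you've downloaded a GGUF file:

    from llama_cpp import Llama

    # n_gpu_layers=-1 offloads all layers to Metal (Apple GPU);
    # on Apple Silicon the GPU shares the same unified memory pool.
    llm = Llama(
        model_path="./model.gguf",  # placeholder: any local GGUF model
        n_gpu_layers=-1,
        n_ctx=2048,
    )

    out = llm("Explain unified memory in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])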



I read that the memory bandwidth is 50% lower on the M3, combined with a lower CPU core count. This reduction will hurt inference performance. It may be better to stick with the M2 series if you really think spending $10k on a Mac laptop to do inference slowly (but faster than a plain PC, of course) makes sense.
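To see why bandwidth dominates here, a back-of-envelope sketch: single-token decoding is roughly memory-bandwidth bound, because every weight has to be streamed from memory once per generated token. All numbers below are illustrative assumptions, not measured figures for any specific chip:

    # Rough ceiling on decode speed: weights read once per token.
    # Bandwidth and model-size numbers are assumptions for illustration.
    def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    model_gb = 35.0  # e.g. a ~70B model at ~4-bit quantization (assumption)
    for label, bw in [("full bandwidth (400 GB/s, assumed)", 400.0),
                      ("half bandwidth (200 GB/s)", 200.0)]:
        print(f"{label}: ~{est_tokens_per_sec(bw, model_gb):.1f} tok/s ceiling")

Halving bandwidth halves the ceiling (~11.4 -> ~5.7 tok/s in this sketch), which is why a bandwidth cut matters more for decoding than raw compute.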



