
Wow, didn't this happen with Intel too? I think that caused a noticeable drop in performance.

This is probably worse, given that people were trying to experiment with local LLMs on the CPU. It's not like they even offer Nvidia.




Macs have GPUs, and their unified memory architecture means the GPU has access to the full system RAM. CUDA isn't a requirement for running ML workloads on a GPU.
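For example (a minimal sketch, assuming PyTorch installed with its Metal/MPS backend; the layer and sizes are just placeholders, not a real LLM), you can target the Mac GPU with no CUDA involved:

    import torch

    # On Apple Silicon, PyTorch exposes the GPU through the "mps" backend;
    # tensors placed there live in the same unified memory as the CPU.
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

    model = torch.nn.Linear(4096, 4096).to(device)   # toy layer standing in for a model
    x = torch.randn(1, 4096, device=device)
    y = model(x)
    print(y.shape, y.device)

The practical upside of the shared RAM is that a model only limited by memory capacity can use most of what the machine has, rather than being capped by a separate pool of VRAM.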





