
Apple has a Neural Engine, and it really speeds up many Core ML models: if most of a model's operators are supported on the NPU, inference is significantly faster than on the GPU of my MacBook M2 Max (which has a Neural Engine similar to the one in, e.g., the iPhone 13). These ASIC NPUs simply implement in hardware many of the typical low-level operators used in most ML models. A minimal sketch of how you can nudge Core ML toward the Neural Engine is below.
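
A minimal Swift sketch, assuming a compiled model at the hypothetical path "MyModel.mlmodelc": MLModelConfiguration.computeUnits tells Core ML which hardware to prefer, and .cpuAndNeuralEngine (macOS 13+/iOS 16+) skips the GPU entirely, while .all lets Core ML pick the fastest unit per layer.

    import CoreML

    // Hedged sketch: prefer the Neural Engine when loading a compiled model.
    // "MyModel.mlmodelc" is a hypothetical path, not from the original comment.
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine  // or .all to let Core ML choose per layer

    do {
        let url = URL(fileURLWithPath: "MyModel.mlmodelc")
        let model = try MLModel(contentsOf: url, configuration: config)
        print("Loaded:", model.modelDescription)
    } catch {
        print("Failed to load model:", error)
    }

If an operator in the model isn't supported by the Neural Engine, Core ML falls back to the GPU or CPU for that part of the graph, which is why coverage of the model's operators matters for the speedup.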


