
Chips have a 5-7 year lead time. Apple has been shipping neural chips for years while everyone else is still designing their v1.

Apple is ahead of the game for a change, getting its chips in place just as the software exits alpha and goes mainstream.




But they haven't exposed them for general use. They are missing a tremendous opportunity. They have that unique unified memory model on the M1/M2 ARM chips, so they have something no other consumer devices have. If they exposed their neural chips, they'd solidify their lead. They could sell a lot more hardware.


They are, though. Apple recently released support for using Apple Silicon for training via PyTorch, and has libraries to leverage the Neural Engine in CoreML.


> Apple Silicon for training via PyTorch recently

This is just allowing PyTorch to make use of the Apple GPU, assuming the models you want to train aren't written with hard-coded CUDA calls (I've seen many that are, since for a long time CUDA was the only game in town).

PyTorch can't currently use the Neural Engine at all.

AFAIK the Neural Engine is only usable for inference, and only via CoreML (coremltools in Python).
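To make the distinction concrete: what PyTorch does offer on Apple Silicon is the MPS device, which targets the GPU. A minimal sketch (assuming PyTorch >= 1.12, which introduced the backend), with a CPU fallback so it also runs on non-Apple hardware:

```python
import torch

# Select the Apple-GPU ("mps") backend when available; this is the GPU,
# not the Neural Engine, which PyTorch cannot target.
device = "mps" if torch.backends.mps.is_available() else "cpu"

x = torch.ones(2, 2, device=device)
y = (x @ x).cpu()  # 2x2 matmul of all-ones -> every entry is 2.0
print(y)
```

Existing training scripts that hard-code `.cuda()` or `device="cuda"` won't pick this up; they need the device string made configurable as above.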


Thank you! I wasn't aware of that. Let me research it. May 2022 announcement. Is this suitable for apps like llama.cpp, given that it's a Python library? It appears to be a library, but they didn't document how to use the underlying hardware; I welcome more info.



