But they haven't exposed it for general use. They are missing a tremendous opportunity. They have that unique unified memory model on the M1/M2 ARM chips, so they have something no other consumer devices have. If they exposed their neural chips they'd solidify their lead. They could sell a lot more hardware.
This just allows PyTorch to use the Apple GPU, assuming the models you want to train aren't written with hard-coded CUDA calls (I've seen many that are, since for a long time CUDA was the only game in town).
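For anyone curious, the device-agnostic pattern looks roughly like this. A minimal sketch, assuming a PyTorch build with the MPS backend (1.12+); the guard keeps it from crashing on older versions:

```python
import torch

# Prefer Apple's Metal (MPS) backend when present, otherwise fall back
# to CPU. getattr-guard in case torch predates the mps backend.
mps_backend = getattr(torch.backends, "mps", None)
if mps_backend is not None and mps_backend.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Instead of hard-coded model.cuda() / tensor.cuda() calls, move
# everything to whichever device was selected:
x = torch.randn(4, 4, device=device)
w = torch.randn(4, 4, device=device)
y = x @ w
print(y.shape)  # torch.Size([4, 4])
```

Models written with `.cuda()` sprinkled everywhere need those calls replaced with `.to(device)` before this works.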
PyTorch can't use the Neural Engine at all currently
AFAIK the Neural Engine is only usable for inference, and only via Core ML (the coremltools package in Python).
Thank you! I wasn't aware of that. Let me research it. I see the May 2022 announcement. Is this suitable for apps like llama.cpp, given that coremltools is a Python library? It appears to be a library, but they didn't document how to target the underlying hardware. I welcome more info.
Apple is ahead of the game for a change, getting their chips in line as the software exits alpha and goes mainstream.