I definitely agree that Apple will optimize their SoC, but IMO their best solution is to just copy Nvidia's homework. Dedicated inference hardware is power-hungry and rarely any better than a beefed-up GPU, or even a RISC CPU core at a decent clock speed. Integrating AI at the GPU level lets you get dual-purpose functionality out of the same silicon; it's what Nvidia has been building toward since they started shipping CUDA.
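To make the "dual-purpose silicon" point concrete, here's a minimal sketch (my own toy example, plain CUDA with made-up sizes, no cuBLAS or tensor cores): the same matrix-vector kernel that could transform vertex or pixel data is also the core of a dense inference layer. Only the data changes; the ALUs don't care.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// y[i] = sum_j A[i*n + j] * x[j]  -- one thread per output row (toy GEMV)
__global__ void gemv(const float* A, const float* x, float* y, int m, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= m) return;
    float acc = 0.0f;
    for (int j = 0; j < n; ++j)
        acc += A[i * n + j] * x[j];
    y[i] = acc;
}

int main() {
    const int m = 4, n = 4;
    float hA[m * n], hx[n], hy[m];

    // "Graphics" reading: A is a 4x4 transform, x is a homogeneous vertex.
    // "Inference" reading: A is a dense layer's weights, x is an activation vector.
    // Same kernel either way.
    for (int i = 0; i < m * n; ++i) hA[i] = (i % (n + 1) == 0) ? 2.0f : 0.0f; // 2 * identity
    for (int j = 0; j < n; ++j)     hx[j] = (float)(j + 1);

    float *dA, *dx, *dy;
    cudaMalloc(&dA, sizeof(hA));
    cudaMalloc(&dx, sizeof(hx));
    cudaMalloc(&dy, sizeof(hy));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx, sizeof(hx), cudaMemcpyHostToDevice);

    gemv<<<1, 32>>>(dA, dx, dy, m, n);
    cudaMemcpy(hy, dy, sizeof(hy), cudaMemcpyDeviceToHost);

    for (int i = 0; i < m; ++i) printf("y[%d] = %.1f\n", i, hy[i]); // expect 2, 4, 6, 8

    cudaFree(dA); cudaFree(dx); cudaFree(dy);
    return 0;
}
```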
> besides the Nvidia bubble, which is late-days
The Nvidia bubble might just be beginning, if we keep relying on phrases like "training agnostic" to pay our dividends.