Apple has a point, though: what PC laptop (or typical PC, for that matter, at that price) can give you 96 GB of VRAM?
If you could run ML models properly on that machine, it would be pretty nice for inference on larger models.
Now, of course, Apple being Apple, they expect you to hand-code ML algos in Swift (lol. lmao.) but still, 96 GB of VRAM is 96 GB of VRAM.
Edit: Just to clarify, I understand that this has 400 GB/s of memory bandwidth, not 3.2 TB/s like an Nvidia accelerator with the same memory size. But the latter costs tens of thousands and requires, well, a whole datacenter probably.
This allows you to run some GPT-X or Diffusion model on your laptop. In theory.
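To make the "in theory" concrete, here's a rough back-of-envelope sketch: weights have to fit in memory, and token-by-token decoding is roughly memory-bandwidth-bound (each generated token reads all the weights once). The 70B / 4-bit numbers below are illustrative assumptions, not benchmarks of any specific machine.

```python
def model_size_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (ignores KV cache and activations)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def decode_tokens_per_s(size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed if every token streams all weights once."""
    return bandwidth_gb_s / size_gb

# Hypothetical example: a 70B-parameter model quantized to 4 bits (~0.5 bytes/param).
size = model_size_gb(70, 0.5)            # 35 GB -- fits in 96 GB with room to spare
slow = decode_tokens_per_s(size, 400)    # ~11 tok/s at 400 GB/s
fast = decode_tokens_per_s(size, 3200)   # ~91 tok/s at 3.2 TB/s
print(size, slow, fast)
```

So the bandwidth gap shows up as decode speed, not as whether the model runs at all; the big unified memory is what makes loading it possible in the first place.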