Having not seen benchmarks, I would imagine that the claimed memory bandwidth of ~800 GB/s vs. Threadripper's claimed ~166 GB/s would make a significant difference for a number of real-world workloads.
Someone will probably chime in and correct me (such is the way of the internet: Cunningham's Law in action), but I don't think the CPU itself can access all 800 GB/s. I think someone in one of the previous M1 Pro/Max threads mentioned that several of the memory channels on Pro/Max are dedicated to the GPU. So you can't just get an 800 GB/s postgres server here.
You could still write OpenCL kernels, of course (rough sketch below). It's not that the bandwidth is unusable; I'm just not sure all of it is accessible to CPU-side code.
(or maybe it is? it's still a damn fast piece of hardware either way)
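For what it's worth, here's roughly what driving the GPU from CPU-side code via OpenCL looks like. macOS still ships OpenCL 1.2, though Apple has deprecated it in favor of Metal. This is just a minimal sketch; the kernel name, buffer size, and scale factor are made-up illustration values, not anything Apple-specific.

```c
#include <stdio.h>
#include <stdlib.h>
#include <OpenCL/opencl.h>   /* macOS header; <CL/cl.h> elsewhere */

/* Trivial kernel: scale a buffer in place. */
static const char *src =
    "__kernel void scale(__global float *x, float a) {"
    "    size_t i = get_global_id(0);"
    "    x[i] *= a;"
    "}";

int main(void) {
    enum { N = 1 << 20 };
    size_t bytes = N * sizeof(float);
    float *host = malloc(bytes);
    for (int i = 0; i < N; i++) host[i] = 1.0f;

    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    /* Unified memory or not, OpenCL still models this as a device buffer. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                bytes, host, &err);
    float a = 2.0f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &a);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, host, 0, NULL, NULL);

    printf("host[0] = %f\n", host[0]);   /* expect 2.0 */
    free(host);
    return 0;
}
```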
On an M1 Max MacBook Pro, the CPU cores (8P+2E) peak at a combined ~240 GB/s; the rest of the advertised 400 GB/s of memory bandwidth is only usable by the other bus masters, e.g. the GPU, NPU, video encode/decode blocks, etc.
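Numbers like that come from streaming microbenchmarks. A very rough sketch of the idea (not whatever tool produced the figure above; the thread count and buffer size are arbitrary): one thread per core streams through its own large buffer, and you divide total bytes read by wall time. A naive loop like this can end up limited by the serial FP-add chain rather than memory, which is why real tools such as STREAM unroll and vectorize the inner loop.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define THREADS 8
#define BUF_BYTES (512UL * 1024 * 1024)   /* per thread, large enough to miss cache */

typedef struct { double *buf; double sum; } job_t;

/* Each thread sums its own buffer; storing the sum keeps the reads observable. */
static void *reader(void *arg) {
    job_t *j = arg;
    size_t n = BUF_BYTES / sizeof(double);
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += j->buf[i];
    j->sum = s;
    return NULL;
}

int main(void) {
    pthread_t t[THREADS];
    job_t jobs[THREADS];

    /* Allocate and fault in the pages before starting the clock. */
    for (int i = 0; i < THREADS; i++) {
        jobs[i].buf = malloc(BUF_BYTES);
        memset(jobs[i].buf, 1, BUF_BYTES);
    }

    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, reader, &jobs[i]);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);

    double secs = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    printf("~%.1f GB/s aggregate read bandwidth\n",
           (double)THREADS * BUF_BYTES / secs / 1e9);
    return 0;
}
```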
So now the follow-on question I really wanted to ask: if the CPU can't access all the memory channels, does that mean it can only address a fraction of the total memory as CPU memory? Or is it a situation where all the channels go into a controller/bus, but the CPU link out of the controller is only wide enough to handle a fraction of the bandwidth?
It's the latter: the CPU can still address all of the memory, it just can't saturate the full bandwidth on its own. It's more akin to how, on Intel, each core's L2 has some maximum bandwidth to the LLC and can't individually saturate the total bandwidth available on the ring bus. But Intel doesn't have the LLC <-> RAM bandwidth for that to be generally noticeable.
Linking this[1] because TIL that the memory bandwidth number is more about the SoC as a whole. The discussion in the article is interesting because they are actively trying to saturate the memory bandwidth. Maybe the huge bandwidth is a relevant factor for the real-world uses of a machine called "Studio" that retails for over $3,000, but not as much for people running postgres?