This has been a problem with AMD for as long as I can remember. About 10 years ago, AMD made a strong push against Intel and traded memory latency for larger last-level cache sizes and a directory-based cache architecture, where better utilization was supposed to make up for the smaller L1/L2 sizes.
Didn't work. Intel smoked them, especially on server workloads that were cache optimized. I wonder if the same thing is going to repeat itself?
I haven't built a desktop in almost a decade, so I was thinking about it when the top-shelf Ryzen chips hit the market. I don't play any games, but would use it for server dev and basic ML work.
Cache optimized means that either the processor is able to prefetch the data before it is needed or it is already in cache. It's exactly this situation in which memory latency doesn't matter at all. Workloads like web servers or databases that run on servers are generally not cache optimized. Your Java, Python, or PHP program is going to chase a lot of pointers, which incurs memory accesses. So yes, Intel CPUs would be better at this, but calling these workloads cache optimized is completely wrong.
I'm talking specifically about the workloads I was interested in at the time, where cache optimized means we took pains to take advantage of the larger L1s and to get L1 hits. But this was a general problem too, and noted at the time by many people.
AMD's smaller L1 was a definite negative at the time. This was back when hyperthreading could be a net negative because of the reduced L1 cache per thread, so we would turn that off, too.