> Unfortunately, the laws of physics driving DRAM cells have not improved much over the last couple of years (or decades, for that matter), so memory chips still must operate with similar absolute latencies, driving up the relative CAS latency. In this case 14ns remains the gold standard, with CAS latencies at the new speeds being set to hold absolute latencies around that mark.
Some gaming memory kits can get that down to 10 ns of latency or less. Though I guess if memory latency is your bottleneck, you should look at HBM.
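For a rough sense of how the absolute latency stays put while the cycle count climbs, here's a back-of-the-envelope sketch; the CL values are illustrative, roughly JEDEC-class bins rather than numbers from any particular datasheet:

```python
# Absolute CAS latency = CL cycles / real memory clock.
# The clock is half the DDR transfer rate (two transfers per clock).
kits = {
    "DDR3-1600 CL11": (11, 1600),
    "DDR4-3200 CL22": (22, 3200),
    "DDR5-6400 CL46": (46, 6400),
}

for name, (cl, mt_per_s) in kits.items():
    clock_ghz = mt_per_s / 2 / 1000
    print(f"{name}: {cl / clock_ghz:.2f} ns")  # all land near ~14 ns
```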
The smallest transfer done from memory is a single cache line, which on most desktop machines is 64 bytes, or 512 bits. You could imagine a memory bus that was 512 bits wide and transferred a cache line per clock, and that would improve latency compared to a narrower bus running at a higher clock speed. HBM doesn't do that, though; instead, every HBM3 module has 16 individual 64-bit channels with 8n prefetch (that is, when you send a single request to a single channel, it responds with 512 bits over 8 cycles).
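A quick back-of-the-envelope check of those numbers (taken from the comment above, not from the HBM3 spec itself):

```python
# One HBM3 channel, per the figures above: 64 bits wide, 8n prefetch.
channel_width_bits = 64
prefetch_beats = 8
bits_per_request = channel_width_bits * prefetch_beats    # 512 bits

cache_line_bits = 64 * 8                                   # 64-byte cache line
assert bits_per_request == cache_line_bits                 # one request = one cache line

channels_per_stack = 16   # so up to 16 independent cache-line requests per stack
```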
That's CAS latency. To convert a timing to nanoseconds, you divide the number of cycles by the actual clock frequency of your sticks, which is half the transfer rate since DDR moves data twice per clock. For example, DDR4-4000 CL14 runs its clock at 2000MHz = 2GHz, so the CAS latency is 14/2 = 7ns.
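A minimal sketch of that conversion (the function name is just mine):

```python
def cas_latency_ns(cl_cycles: int, transfer_rate_mt_s: int) -> float:
    """CL cycles divided by the real clock, which is half the DDR transfer rate."""
    clock_ghz = transfer_rate_mt_s / 2 / 1000
    return cl_cycles / clock_ghz

print(cas_latency_ns(14, 4000))  # DDR4-4000 CL14 -> 7.0 ns
print(cas_latency_ns(22, 3200))  # DDR4-3200 CL22 -> 13.75 ns
```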
Just to make sure I understand: you're saying that checking L1/L2/L3 takes around 35ns, and then the CPU accesses DRAM, which takes 10ns? If that's so, how is the L3 cache any faster than DRAM? Also, can you explain why the memory controller adds latency?
An L3 hit only takes ~15 ns, so that means another 15-20 ns is spent traversing the fabric and the memory controller. I'm not sure exactly what's involved there, but on Intel the request has to go around the ring bus, and on AMD it has to cross chiplets.
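Putting the rough numbers from this thread side by side (a ballpark sketch, not a measurement):

```python
# Ballpark budget for an access that misses all caches, using the figures
# quoted in this thread rather than anything measured.
l3_miss_detect_ns = 15            # roughly the L3 hit latency
fabric_and_imc_ns = (15, 20)      # ring/fabric traversal plus memory controller
dram_cas_ns = 10                  # best case: row already open, fast kit

low = l3_miss_detect_ns + fabric_and_imc_ns[0] + dram_cas_ns
high = l3_miss_detect_ns + fabric_and_imc_ns[1] + dram_cas_ns
print(f"~{low}-{high} ns end to end, even before any row misses")
```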
Interesting. If an L3 hit takes 15 ns, then based on your argument a hypothetical CPU with only one core (and hence no fabric) would be better off without L3, since a DRAM read can be performed in just 10 ns.
You still need a memory controller, and you still need to get to that controller on the edge of the die. And going to RAM more often will surely consume more power.
This is the part I don't understand. You're saying that the interval from when the DRAM first receives a read request to when it sends the data back over the channel is about 10ns, at least in fancy gaming RAM. Ok, fine. Where is the other 10-20 ns of latency coming from? Why can't the CPU begin using the data as soon as it arrives? I guess some time is needed to move the data from the memory controller to the actual CPU core. But it seems to me (far from an expert) that this shouldn't take a full 10-20 ns. Or am I mistaken?
Firstly, to clarify, there's nothing very special about 'gaming RAM' other than that the particular chunk of silicon performs better than others, so they stuck a shiny sticker and an oversized heatsink on it.
The problem here is that the latency is state-dependent, and who knows which case people are talking about here. The memory itself can have a latency of 1-3x the CAS latency number, and you need to understand how DRAM is accessed to appreciate why. That will also clarify why an L3 cache is such a good idea.
> For a completely unknown memory access (AKA Random access), the relevant latency is the time to close any open row, plus the time to open the desired row, followed by the CAS latency to read data from it.
Then you've got a small amount of time going to and from the controller, which might also be doing some address translation and maybe some access reordering to avoid switching rows. I think 30ns is very optimistic.
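Here's a quick sketch of that best case vs. worst case; the 16-18-18 DDR4-3200 timings are a hypothetical kit picked for illustration, not something from this thread:

```python
# Per the quoted rule: worst case = close the open row (tRP) + open the
# target row (tRCD) + CAS latency (CL); best case = CL alone (row already open).
clock_ghz = 3200 / 2 / 1000       # 1.6 GHz real clock for DDR4-3200
CL, tRCD, tRP = 16, 18, 18        # cycles, hypothetical 16-18-18 kit

best_ns = CL / clock_ghz                    # ~10 ns
worst_ns = (tRP + tRCD + CL) / clock_ghz    # ~32.5 ns
print(f"{best_ns:.1f} ns best case, {worst_ns:.1f} ns worst case (DRAM alone)")
# Controller/fabric time comes on top of both numbers.
```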