Synchronous DRAM (SDR and DDRx) still has a RAS wire (row address strobe) and CAS wire (column address strobe), but these signals are now sampled synchronously on the clock edge, and they double as part of the encoding for other commands.
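To illustrate, here is a sketch of the classic SDR/DDR3-era command truth table, where the active-low RAS#, CAS#, and WE# pins (sampled with CS# asserted) together select a command. DDR4/5 repurpose some of these pins, so treat the encodings as illustrative of the idea rather than a spec for current parts:

```c
#include <stdio.h>

/* Decode a command from the active-low RAS#/CAS#/WE# pins
 * (classic SDR/DDR3-era encoding, CS# assumed asserted). */
const char *decode_cmd(int ras_n, int cas_n, int we_n) {
    int code = (ras_n << 2) | (cas_n << 1) | we_n;
    switch (code) {
        case 3:  return "ACTIVATE (open a row)";     /* 011 */
        case 5:  return "READ";                      /* 101 */
        case 4:  return "WRITE";                     /* 100 */
        case 2:  return "PRECHARGE (close the row)"; /* 010 */
        case 1:  return "REFRESH";                   /* 001 */
        case 0:  return "MODE REGISTER SET";         /* 000 */
        case 7:  return "NOP";                       /* 111 */
        default: return "other";
    }
}

int main(void) {
    printf("%s\n", decode_cmd(0, 1, 1)); /* RAS# low alone -> ACTIVATE */
    return 0;
}
```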
DDR DRAM is organized into banks (on DDR4/5 these are further grouped into bank groups), rows, and columns. A bank has a dedicated set of read/write circuits (the sense amplifiers) for its memory array, but rows and columns within a bank share a lot of circuitry. Those read/write circuits are comparatively slow and must be precharged before they can perform an access, and each bank can only have one row open at a time. The I/O path is much narrower than the memory array, which is why each row is subdivided into columns.
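To make the shape concrete, here is the arithmetic for one plausible die (an 8 Gb x8 DDR4 part with 16 banks; the specific counts are assumptions that vary by density and width):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical 8 Gb x8 DDR4 die: 4 bank groups x 4 banks,
     * 65536 rows per bank, 1024 columns per row, 8 data pins. */
    uint64_t banks = 4 * 4;
    uint64_t rows  = 65536;   /* 16 row-address bits */
    uint64_t cols  = 1024;    /* 10 column-address bits */
    uint64_t width = 8;       /* bits delivered per column access, per beat */

    uint64_t row_bits  = cols * width;   /* one open "page" */
    uint64_t bank_bits = rows * row_bits;
    uint64_t die_bits  = banks * bank_bits;

    printf("page size: %llu bytes\n",
           (unsigned long long)(row_bits / 8));     /* 1024 bytes */
    printf("die capacity: %llu Gbit\n",
           (unsigned long long)(die_bits >> 30));   /* 8 Gbit */
    return 0;
}
```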
The best-case scenario is that the row you want is already open in its bank, so you just issue a read or write command (often a "read/write with auto-precharge," which closes the row afterward so the bank is ready for the next access) and the time from when that command is issued to when the data bus starts the transfer is the CAS latency. If you add in the burst time (4 cycles for a double-data-rate burst length of 8) plus the round-trip signal propagation time to the memory, you get your best-case access latency.
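Putting numbers on that, a minimal sketch assuming a DDR4-3200 CL22 part and an invented 1 ns of round-trip board propagation:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers: DDR4-3200 (3200 MT/s = 1600 MHz clock), CL22.
     * rtt_ns is an assumed round-trip board propagation figure. */
    double tck_ns = 1.0 / 1.6;  /* clock period: 0.625 ns */
    double cl     = 22;         /* CAS latency, in clocks */
    double burst  = 4;          /* BL8 at double data rate = 4 clocks */
    double rtt_ns = 1.0;

    double best_ns = (cl + burst) * tck_ns + rtt_ns;
    printf("best-case access: %.2f ns\n", best_ns);  /* ~17.25 ns */
    return 0;
}
```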
The worst-case scenario for DDR is that you have just accessed one row in a bank and you need to read/write a different row in that same bank. To do that, you have to wait out the bank precharge time (tRP) and then the row activation time (tRCD) before you can even issue your read or write command. That adds a lot of waiting during which the bank is effectively idle, so memory controllers are very aggressive about reordering requests to minimize the number of row activations, and you may find your access waiting in a queue behind several other accesses, too.
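Extending the same illustrative DDR4-3200 part, now as a 22-22-22 configuration (the tRP and tRCD values of 22 clocks each are assumptions):

```c
#include <stdio.h>

int main(void) {
    /* Same illustrative DDR4-3200 22-22-22 part as above. */
    double tck_ns = 1.0 / 1.6;            /* 0.625 ns clock */
    double trp = 22, trcd = 22, cl = 22;  /* precharge, activate, CAS, in clocks */
    double burst = 4, rtt_ns = 1.0;

    double worst_ns = (trp + trcd + cl + burst) * tck_ns + rtt_ns;
    printf("row-conflict access: %.2f ns\n", worst_ns);  /* ~44.75 ns */
    return 0;
}
```

With these numbers a row conflict costs roughly 2.6x a page hit, before counting any time spent queued behind other requests.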
Also, memory controllers generally use a hash function to map physical addresses to DRAM channels, banks, rows, and columns, and they set it up to optimize sequential access: address bits roughly map to (from least to most significant) memory channel -> bank -> column -> row. It is more complicated than that in reality, but that's not a bad way to think about it.
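Here is a toy decomposition in that spirit; real controllers XOR/hash bits together, and the field positions and widths below are invented purely for illustration:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy address mapping (all field widths invented):
 *   bits [5:0]   byte within a 64 B cache line
 *   bit  [6]     channel      (2 channels)
 *   bits [10:7]  bank         (16 banks)
 *   bits [20:11] column       (1024 columns)
 *   bits [36:21] row          (65536 rows) */
int main(void) {
    uint64_t addr = 0x123456789ULL;  /* arbitrary physical address */
    unsigned channel = (addr >> 6)  & 0x1;
    unsigned bank    = (addr >> 7)  & 0xF;
    unsigned column  = (addr >> 11) & 0x3FF;
    unsigned row     = (addr >> 21) & 0xFFFF;
    printf("ch=%u bank=%u col=%u row=%u\n", channel, bank, column, row);
    return 0;
}
```

With this layout a sequential stream alternates channels every cache line and cycles through all the banks before the column advances, so streaming keeps one row open per bank for a long stretch before any row has to change.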