The term gigatransfers per second is essentially Gb/s, so 2 bits × 32 billion transfers per second is 64 billion bits per second. This is done using PAM4, which stands for Pulse Amplitude Modulation, where 4 is the number of valid voltage levels. Normally we use what's called NRZ, or Non-Return-to-Zero, where there are only two voltage levels: 0 and 1.
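To make that arithmetic concrete, here's a quick Python sketch of my own (assuming the 32 GBaud per-lane symbol rate and PAM4's 2 bits per symbol):

    import math

    symbol_rate_per_lane = 32e9               # 32 billion symbols (unit intervals) per second, per lane
    pam_levels = 4                             # PAM4: four valid voltage levels
    bits_per_symbol = math.log2(pam_levels)    # = 2.0 bits per symbol

    print(symbol_rate_per_lane * bits_per_symbol / 1e9)  # 64.0 -> the "64" raw per-lane figure
    print(symbol_rate_per_lane * math.log2(2) / 1e9)     # 32.0 -> what NRZ (two levels) would give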
They did pull something cheeky though - the 256 GB/s is for a PCIe x32 connection. It exists in the spec, but I've literally never seen it in the wild.
> The term gigatransfers per second is essentially Gb/s
It's not really that either because PCIe 2.0 was "5GT/s" despite being only 4 gigabits per second.
You could make some kind of argument about pre- and post-encoding bits, but that still falls down in other circumstances. Gigabit Ethernet has five voltage levels per lane, 125 million times a second. What's its MT/s if the answer isn't 125?
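For reference, the 1000BASE-T arithmetic as I understand it (PAM-5 at 125 MBaud on each of the four pairs, with the fifth level spent on coding redundancy rather than data):

    symbol_rate = 125e6        # symbols per second, per wire pair
    pairs = 4                  # 1000BASE-T drives all four pairs at once
    data_bits_per_symbol = 2   # PAM-5 carries 2 data bits per symbol; the fifth level is coding redundancy

    print(symbol_rate * pairs * data_bits_per_symbol / 1e6)  # 1000.0 Mb/s total
    # The transfer (symbol) rate per pair is still 125 M/s, even though each pair moves 250 Mb/s.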
> It's not really that either because PCIe 2.0 was "5GT/s" despite being only 4 gigabits per second.
Sort of, depends on where you're measuring. PCIe 2.0 runs at 5 GT/s, which is the speed the Phy layer runs at, but it uses 8b/10b encoding, so the Data Link layer sees 4 Gb/s. For Gen 3.0 and 4.0 (and I think 5.0) the Phy layer uses 128b/130b encoding, which has much less overhead. So technically it's about 63 Gb/s but we round up. But that's kind of a useless measurement because there's additional overhead from Acks/Nacks on the Data Link layer and TLP headers on the Transaction layer (which depends on the size of your TLPs, and whether or not you're using infinite credits).
edit: I'm not sure which encoding the Phy layer on PCIe Gen 6.0 uses, since it's PAM4. The (approximately) 63 Gb/s on the Data Link layer assumes 128b/130b encoding.
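Here's the encoding math for the generations above spelled out as a small sketch (raw line rate times encoding efficiency; the Gen 6.0 line carries the same 128b/130b assumption as the edit, not a claim about what the spec actually uses):

    def data_link_rate(raw_gt_s, payload_bits, coded_bits):
        # Per-lane rate the Data Link layer sees, before Ack/Nak and TLP header overhead.
        return raw_gt_s * payload_bits / coded_bits

    print(data_link_rate(5, 8, 10))      # PCIe 2.0:  4.0  Gb/s per lane (8b/10b)
    print(data_link_rate(32, 128, 130))  # PCIe 5.0: ~31.5 Gb/s per lane (128b/130b)
    print(data_link_rate(64, 128, 130))  # Gen 6.0 *if* 128b/130b applied: ~63 Gb/s per lane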
> What makes you say that? The chart says x16.
Maybe that metric includes both directions. Either way, it's misleading, as you only get 128 GB/s (before protocol overhead) on PCIe 6.0 x16: (64 Gb/s × 16 lanes) / 8 bits per byte = 128 GB/s.
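Spelled out (assuming the 64 Gb/s raw per-lane figure and ignoring encoding/protocol overhead):

    lanes = 16
    raw_gbit_per_lane = 64                         # Gb/s per lane, per direction
    one_direction = lanes * raw_gbit_per_lane / 8  # bits -> bytes
    print(one_direction)        # 128.0 GB/s each way
    print(one_direction * 2)    # 256.0 GB/s if both directions are summed, as the chart seems to do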
They're definitely considering both directions for the total bandwidth number.
But I don't think that's what's happening with the gigatransfers. When they launched 5.0, they were clearly counting one differential pair of pins: "Delivers 32 GT/s raw bit rate and up to 128 GB/s via x16 configuration"
But my point is, even though they were already doing that, they were claiming 32 GT/s last generation, and it's 64 GT/s now. I don't think that's counting more pins; I think they're saying that each transfer is two transfers.
6.0 transmits 2 bits at a time per lane, 32 billion times a second.
So why are they claiming it's 64 gigatransfers per second? Do I misunderstand the term, or are they trying to pull something cheeky?