It's an AMD problem as well. It's an absolute nightmare trying to research a computer today. Which ports can you use in which circumstances? Which slots go to the CPU directly and which go to the chipset?
Which lanes are disabled if you use NVMe slot 2? Which slot runs at which generation? A proper nightmare.
And while we are at it, dedicating PCIe lanes to onboard NVMe slots must be one of the most boneheaded decisions in modern computers. Just use a PCIe card with up to four NVMe slots on it instead.
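If you want to sanity-check what a board actually gives you once it's built, Linux exposes the negotiated link width and speed of every PCIe device in sysfs. A minimal sketch (Linux-only; it reads the standard current_link_speed / current_link_width attributes, which not every device exposes):

    #!/usr/bin/env python3
    """Print negotiated vs. maximum PCIe link speed/width for every device (Linux sysfs)."""
    from pathlib import Path

    def read_attr(dev, name):
        """Return a sysfs attribute as a stripped string, or None if absent."""
        try:
            return (dev / name).read_text().strip()
        except OSError:
            return None

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        cur_speed = read_attr(dev, "current_link_speed")
        cur_width = read_attr(dev, "current_link_width")
        max_speed = read_attr(dev, "max_link_speed")
        max_width = read_attr(dev, "max_link_width")
        if not cur_speed or not cur_width:
            continue  # device doesn't expose PCIe link attributes
        note = ""
        if max_width and cur_width != max_width:
            note = "   <-- running below its maximum link width"
        print(f"{dev.name}: x{cur_width} @ {cur_speed} (max x{max_width} @ {max_speed}){note}")

A drive that shows up as x2 when its slot is labelled x4 is exactly the kind of lane-sharing surprise the manuals bury in footnotes.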
Maybe it’s because I bought a “gaming” motherboard, but the manual was pretty clear (to my understanding at least) about which configurations of M.2 drives and PCIe lanes would run at which version, what went to the CPU and what went to the chipset.
The issue is cross-comparison, not reading the manual of a single board. In the past you could look at the physical motherboard layout and compare it with another board's. Today the layout is merely a suggestion; you take it with a huge grain of salt because it's nothing but lies. Which means going to the website of every single motherboard you're considering, downloading its PDF manual, and digging through enough fine-print caveats to make your eyes bleed.
It's not just segmentation. Laptop buyers are not going to pay for 64 lanes. A regular Intel SKU of the 12th generation has 28 PCIe 4.0/5.0 lanes. A Xeon has 64, does not have 5.0, and costs way more, partly because it has 4189 pins on the bottom, which is insane.
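For a rough sense of what those lane counts mean in bandwidth terms, here is a back-of-envelope sketch (the 16x Gen5 + 12x Gen4 split of the 28 lanes is my assumption for illustration, not something stated above):

    # Rough per-direction PCIe throughput per lane, after 128b/130b line coding.
    GT_PER_S = {"3.0": 8, "4.0": 16, "5.0": 32}

    def lane_gbps(gen):
        """GB/s per lane, per direction, after 128b/130b encoding overhead."""
        return GT_PER_S[gen] * (128 / 130) / 8

    # Assumed split of the 28 CPU lanes on the desktop part -- check the spec sheet.
    desktop = 16 * lane_gbps("5.0") + 12 * lane_gbps("4.0")
    xeon = 64 * lane_gbps("4.0")  # 64 Gen4 lanes, no Gen5

    print(f"PCIe 4.0 lane: {lane_gbps('4.0'):.2f} GB/s, PCIe 5.0 lane: {lane_gbps('5.0'):.2f} GB/s")
    print(f"Desktop (28 lanes, assumed 16x5.0 + 12x4.0): ~{desktop:.0f} GB/s aggregate")
    print(f"Xeon (64 x 4.0): ~{xeon:.0f} GB/s aggregate")

The point being that the consumer part closes a lot of the gap on raw bandwidth, but bandwidth isn't the complaint here; lane count is.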
> The problem with PCIe is not bandwidth, it is the limit in lanes on consumer PCs... For servers it is a different story
EPYC/Threadripper is awesome but also explicitly not the segment being discussed. Since it was mentioned: it's 64 PCIe lanes per socket on Intel (up to 8 sockets / 512 lanes) versus 128 lanes for a single socket / up to 160 lanes for dual socket on EPYC. Gonna cost you an arm and a leg, and you'd better know your NUMA, though.
In the consumer segment, AMD has 16 PCIe lanes for graphics and up to 4 PCIe lanes for NVMe drives; the chipset link is an additional x4 of PCIe rather than a different interconnect. That comes to 20 PCIe lanes for user devices plus 4 lanes' worth of PCIe bandwidth behind the chipset. https://i.imgur.com/8Aug02l.png
Intel offers something similar: 16 PCIe lanes for graphics and 4 PCIe lanes for NVMe. They opt for a proprietary DMI connection to the chipset, which is equivalent to 8 lanes of PCIe 3.0 bandwidth or 4 lanes of PCIe 3.0 bandwidth, depending on the chipset. https://i.pcmag.com/imagery/reviews/070BdprI2Ik2Ecd2wzo0Asi-...
Both offer splitting the x16 for the GPU into 2 x8, as well as oversubscribing the downstream PCIe bandwidth from the chipset, and on both platforms devices behind the chipset obviously take a latency penalty compared to CPU-direct lanes. In the end the two offerings are pretty much identical in the consumer space.
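To put a number on the oversubscription, here's a rough tally with a hypothetical set of chipset-attached devices (the device mix and the x4 Gen4 uplink are assumptions for illustration; the exact mix varies by board):

    # Illustrative oversubscription tally for a chipset with an x4 Gen4 uplink.
    # The downstream device list is hypothetical -- just a typical-looking build.
    LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}  # per lane, per direction

    uplink = 4 * LANE_GBPS["4.0"]  # x4 PCIe 4.0 link to the CPU (~7.9 GB/s)

    downstream = {
        "NVMe SSD #2 (x4 Gen4)": 4 * LANE_GBPS["4.0"],
        "NVMe SSD #3 (x4 Gen3)": 4 * LANE_GBPS["3.0"],
        "10GbE NIC (x2 Gen3)": 2 * LANE_GBPS["3.0"],
        "SATA + USB controllers": 3.0,  # rough combined figure
    }

    total = sum(downstream.values())
    print(f"Chipset uplink: {uplink:.1f} GB/s")
    for name, gbps in downstream.items():
        print(f"  {name}: {gbps:.1f} GB/s")
    print(f"Downstream peak demand: {total:.1f} GB/s "
          f"-> oversubscribed {total / uplink:.1f}x if everything runs flat out")

In practice the devices rarely all run flat out at once, which is why the oversubscription mostly goes unnoticed until you try to copy between two chipset-attached NVMe drives.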