No, that's controlled by the server: try `lspci -vv` on any Linux system and look at the link speed and width, e.g. `LnkSta: Speed 8GT/s, Width x2`, where x2 means 2 lanes.
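To see whether a device is actually running below its capability, it helps to compare `LnkCap` (what the link can do) against `LnkSta` (what was actually negotiated). A minimal sketch, run on a canned excerpt of `lspci -vv` output so it is self-contained; on a real system you would pipe in `sudo lspci -vv` instead (the device below is made up):

```shell
# Canned excerpt of `lspci -vv` output; the device is hypothetical.
# On a real machine, replace the variable with the live command output.
sample='01:00.0 Non-Volatile memory controller: Example NVMe SSD
	LnkCap:	Port #0, Speed 16GT/s, Width x4
	LnkSta:	Speed 8GT/s (downgraded), Width x2 (downgraded)'

# Keep only the capability and status lines for side-by-side comparison.
printf '%s\n' "$sample" | grep -E 'LnkCap|LnkSta'
```

Here the device is capable of 16 GT/s at x4 but negotiated only 8 GT/s at x2, so it is running well below its rated throughput.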
Besides the speed, you can run into another problem: lane-count limitations.
For example, AMD CPUs have a lot of lanes, but unless you have an EPYC, most of them are not exposed, so the PCH has to spread its meager set among the devices connected to your PCIe bus, and if you have an x16 GPU but also a WiFi adapter, a WWAN card and a few identical NVMe drives, you may find only one of the NVMe drives benchmarks at the throughput you expect.
> For example, AMD CPUs have a lot of lanes, but unless you have an EPYC, most of them are not exposed, so the PCH has to spread its meager set among the devices connected to your PCIe bus, and if you have an x16 GPU but also a WiFi adapter, a WWAN card and a few identical NVMe drives, you may find only one of the NVMe drives benchmarks at the throughput you expect.
Most AM4 boards put an x16 slot direct to the CPU, plus an x4 direct-linked NVMe slot. That's 20 of the 24 lanes; the other 4 lanes go to the chipset, which all the rest of the peripherals sit behind. (There's some USB and other I/O from the CPU, too.) AM5 CPUs added another 4 lanes, which usually feed a second CPU-attached x4 slot.
Early AM4 boards might not have a CPU-attached x4 NVMe slot, so those 4 CPU lanes might not be exposed, and the A300/X300 chipsetless boards don't tend to expose everything, but where else are you seeing AMD boards where all the CPU lanes aren't exposed?
> Early AM4 boards might not have a CPU-attached x4 NVMe slot, so those 4 CPU lanes might not be exposed, and the A300/X300 chipsetless boards don't tend to expose everything
I'm sorry, I oversimplified and said "most of them" when I should have said "not all of them"; 20/24 is more correct for B550 (the most common AM4 chipset) than trying to generalize.
I'm still not quite sure what you're trying to say?
Lanes behind the chipset are multiplexed, and you can't get more than x4 throughput through the chipset (and the link speed between the CPU and the chipset varies depending on the chipset and CPU). But that's not a problem of CPU lanes not being exposed; it's a problem of "not enough lanes", or more likely, lanes not arranged how you'd like. On AM4, if your GPU uses x16 and one NVMe drive uses x4, then everything else gets squeezed through the chipset. On AM5, you usually get two x4 NVMe slots, but again everything else is squeezed through the chipset; X670 is particularly constrained because it just puts a second chipset downstream of the first, so you're adding more devices to squeeze through the same x4 link to the CPU.
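A rough back-of-envelope for why that x4 uplink becomes the bottleneck. The numbers are approximate: 16 GT/s per Gen 4 lane with 128b/130b encoding is per the PCIe spec, while ~7 GB/s per fast Gen 4 NVMe drive is an assumption for illustration, not a measurement:

```shell
awk 'BEGIN {
  # Gen 4 x4 uplink: 16 GT/s * 4 lanes * 128/130 encoding, in GB/s
  # (before protocol overhead, which reduces it further).
  uplink = 16 * 4 * (128 / 130) / 8
  # Three fast Gen 4 NVMe drives behind the chipset, ~7 GB/s each (assumed).
  demand = 3 * 7.0
  printf "uplink %.2f GB/s vs. demand %.1f GB/s\n", uplink, demand
}'
```

So three fast drives behind the chipset could, in the worst case, ask for roughly three times what the uplink can carry, even before SATA, USB and networking take their share.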
Personally, I found that link more confusing than just reading through the description on Wikipedia for a particular Zen version. For example, on https://en.wikipedia.org/wiki/Zen_3 just text-search the page for "lanes" and it explains, for each flavor of chip, how many lanes there are and how many go to the chipset. Similarly, the page on AMD chipsets is pretty succinct: https://en.wikipedia.org/wiki/List_of_AMD_chipsets#AM5_chips...
There's a reason why so many motherboard makers avoid putting a block diagram in their manuals and go for paragraphs of legalese instead, and laziness is only half of it.
> Most AM4 boards put an x16 slot direct to the CPU, plus an x4 direct-linked NVMe slot. That's 20 of the 24 lanes; the other 4 lanes go to the chipset, which all the rest of the peripherals sit behind. (There's some USB and other I/O from the CPU, too.) AM5 CPUs added another 4 lanes, which usually feed a second CPU-attached x4 slot.
> For example, AMD CPUs have a lot of lanes, but unless you have an EPYC, most of them are not exposed, so the PCH has to spread its meager set among the devices connected to your PCIe bus, and if you have an x16 GPU but also a WiFi adapter, a WWAN card and a few identical NVMe drives, you may find only one of the NVMe drives benchmarks at the throughput you expect.
An example from my X670E board:
* first NVMe slot: x4 Gen 5
* second NVMe slot: x4 Gen 4
* 2 USB ports connected to the CPU (10/5 Gbit)
and EVERYTHING ELSE goes through an x4 Gen 4 PCIe link, including 3 additional NVMe slots, 7 SATA ports, a bunch of USB ports, a few x1 PCIe slots, networking, etc.
Try:
`sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|downgrad" | grep -B1 downgrad`
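What that pipeline does: the first `grep` keeps device-address lines and any line mentioning a downgraded link, and `grep -B1 downgrad` then prints each downgraded link together with the device line just above it. A self-contained demonstration on canned output (both devices are made up; on a real system, feed it `sudo lspci -vv` instead):

```shell
# Canned lspci-style output: one downgraded device, one healthy one.
sample='2d:00.0 Ethernet controller: Example GbE NIC
	LnkSta:	Speed 2.5GT/s (downgraded), Width x1 (downgraded)
2e:00.0 Non-Volatile memory controller: Example NVMe SSD
	LnkSta:	Speed 16GT/s, Width x4'

# Keep device headers and downgraded-link lines, then pair each
# downgraded line with the device line right before it.
printf '%s\n' "$sample" \
  | grep -E '^[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|downgrad' \
  | grep -B1 downgrad
```

Only the downgraded NIC shows up in the output; the NVMe drive running at full speed is filtered out.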