
> The problem with PCIe is not bandwidth, it is the limit in lanes on consumer PCs... For servers it is a different story

EPYC/Threadripper is awesome but also explicitly not the segment being discussed. But since it was mentioned: Intel offers 64 PCIe lanes per socket (up to 8 sockets, for 512 lanes), versus 128 lanes for a single-socket EPYC or up to 160 lanes for dual-socket. It'll cost you a damn arm and a leg, and you'd better know your NUMA, though.

In the consumer segment AMD has 16 PCIe lanes for graphics and up to 4 PCIe lanes for NVMe drives; the chipset link is an additional x4 PCIe rather than a different interconnect. That comes to 20 PCIe lanes for user devices plus 4 lanes' worth of PCIe bandwidth behind the chipset. https://i.imgur.com/8Aug02l.png

Intel offers something similar: 16 PCIe lanes for graphics and 4 PCIe lanes for NVMe. They opt for a proprietary DMI link to the chipset, which is equivalent to either 8 or 4 lanes of PCIe 3.0 bandwidth depending on the chipset. https://i.pcmag.com/imagery/reviews/070BdprI2Ik2Ecd2wzo0Asi-...
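
For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The per-lane figures are my own assumptions (~0.985 GB/s per lane per direction for gen3, ~1.97 GB/s for gen4 after encoding overhead), and it assumes gen4 on the CPU lanes; the lane counts are the ones described above:

    # Approximate one-way PCIe bandwidth per lane, in GB/s (assumed figures).
    PER_LANE = {"gen3": 0.985, "gen4": 1.97, "gen5": 3.94}

    def link_bw(lanes: int, gen: str) -> float:
        """Approximate one-way bandwidth of a PCIe link in GB/s."""
        return lanes * PER_LANE[gen]

    # Consumer layout described above: x16 GPU + x4 NVMe off the CPU,
    # plus a x4-equivalent chipset link.
    print(f"GPU x16 gen4: ~{link_bw(16, 'gen4'):.1f} GB/s")
    print(f"NVMe x4 gen4: ~{link_bw(4, 'gen4'):.1f} GB/s")
    print(f"Chipset link x4 gen4: ~{link_bw(4, 'gen4'):.1f} GB/s")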

Both allow splitting the x16 for the GPU into 2 x8, and both oversubscribe the downstream PCIe bandwidth hanging off the chipset; devices behind the chipset on either platform obviously pay a latency penalty relative to CPU-direct lanes as well. In the end the two offerings are pretty much identical in the consumer space.
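
To make the oversubscription point concrete, here's a hypothetical chipset population (illustrative numbers, not any specific board): two gen4 x4 NVMe drives plus a 10GbE NIC already want more than a x4 gen4 uplink can carry at once.

    # Assumed ~1.97 GB/s per gen4 lane per direction (illustrative numbers).
    LANE_GEN4 = 1.97

    # Hypothetical devices behind the chipset vs. its x4 gen4 uplink.
    downstream_gbs = {
        "NVMe SSD #1 (x4 gen4)": 4 * LANE_GEN4,
        "NVMe SSD #2 (x4 gen4)": 4 * LANE_GEN4,
        "10GbE NIC": 1.25,
    }
    uplink_gbs = 4 * LANE_GEN4

    demand = sum(downstream_gbs.values())
    print(f"peak downstream demand ~{demand:.1f} GB/s vs uplink ~{uplink_gbs:.1f} GB/s "
          f"({demand / uplink_gbs:.1f}x oversubscribed)")

Of course that's peak demand; in practice those devices rarely burst at the same time, which is exactly why the oversubscription is usually tolerable.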
