Distance and signal integrity would be my guess. The longer you transmit the signal, the more complicated and expensive it is to maintain signal integrity. There are out-of-spec USB cables that manage longer distances by converting to fiber optic (meaning it’s an active cable), and even then they’re extremely expensive for not that much cable.
Intense preemphasis and equalization are table stakes for high speed serial these days. Have been since PCIe 3. By Gen5, the fundamental lobe goes to 32 GHz, and the mandates are steep. It has to be able to fish the signal up after the dielectrics have eaten all but a few parts per thousand of the high frequency energy! Compared to modern transceivers -- these new ones or even, if you'll indulge my speculation, the transceivers in your own computer that are going unused by the dozen -- compared to those, dual simplex 10 Gb/s just looks... old. The price doesn't really match up, and I wonder if it's a business thing or if there is an unfortunate design choice that baked in some manufacturing difficulty.
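To put a rough number on the "parts per thousand" remark: channel loss is usually quoted in dB at the Nyquist frequency (around 16 GHz for 32 GT/s NRZ), and converting that to a linear fraction is one line of arithmetic. The 30 dB figure below is my own illustrative assumption for a long Gen5-class channel, not a number pulled from the spec:

```python
# Back-of-the-envelope: how much high-frequency signal survives a lossy channel.
nyquist_ghz = 16.0   # PCIe Gen5 NRZ at 32 GT/s -> Nyquist around 16 GHz
loss_db = 30.0       # assumed end-to-end insertion loss at Nyquist (illustrative)

voltage_fraction = 10 ** (-loss_db / 20)  # amplitude that survives (~3.2%)
power_fraction = 10 ** (-loss_db / 10)    # energy that survives (~0.1%)

print(f"voltage remaining at {nyquist_ghz} GHz: {voltage_fraction:.3%}")
print(f"power remaining at {nyquist_ghz} GHz:   {power_fraction:.3%}")
```

That works out to roughly a part per thousand of the high-frequency energy for a ~30 dB channel, which is the order of magnitude the comment above is pointing at; the receiver's equalizer has to rebuild the bit stream from that remainder.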
I don’t know anything about what you just said, as I don’t have that level of digital/analog design knowledge to know how to respond. All I know is that USB 3.0 cables top out around 18 meters, and to get there the cable becomes incredibly thick, heavy, and inflexible. ePCIe cables are measured in cm. Ethernet can go up to 100 m at 10GbE, or farther if you’re willing to sacrifice some speed (or faster if you sacrifice some cable length).
Could switching to PCIe instead of Ethernet for the digital signaling have better performance? I don’t know, but I doubt it. Ethernet is hitting 100 Gbps and higher, and I suspect designing cabling that can keep up is as much of a challenge as the analog design piece. Not to mention that USB 3.2 and Ethernet have very similar speeds; Ethernet just manages it over much longer distances at significantly lower cost. I’m skeptical PCIe has some kind of magic bullet here. These protocols are optimized for totally different use cases.
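To make the "very similar speeds" point concrete, here's a napkin-math sketch of per-lane payload rates once line coding is stripped off (protocol framing takes more on top, and I'm ignoring it here). The line rates and encodings are the standard published values; the comparison itself is just an illustration:

```python
# Rough per-lane payload rates after line coding only (protocol framing not included).
# Tuples are (line rate in Gbaud, line-code efficiency).
links = {
    "USB 3.2 Gen 2 (1 lane)": (10.0,    128 / 132),  # 128b/132b
    "USB 3.2 Gen 2x2":        (20.0,    128 / 132),  # two 10 Gb/s lanes
    "10GBASE-R Ethernet":     (10.3125, 64 / 66),    # 64b/66b
    "PCIe 3.0 x1":            (8.0,     128 / 130),  # 128b/130b
    "PCIe 5.0 x1":            (32.0,    128 / 130),
}

for name, (line_rate_gbd, efficiency) in links.items():
    print(f"{name:24s} ~{line_rate_gbd * efficiency:5.2f} Gb/s payload")
```

USB 3.2 Gen 2, 10GbE, and a single PCIe 3.0-era lane all land in the same ~8-10 Gb/s neighborhood; the real differences are reach, cost, and what kind of traffic the protocol was designed to carry.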