The death of the PCIe expansion card (kerricklong.com)
225 points by Kerrick on Sept 6, 2022 | 241 comments



One related issue is the death of integrated PCIe switches on motherboards. High-end boards used to have PCIe hubs on them so you could plug in a lot of cards and still get all the bandwidth out of a 16-lane CPU.

The reason why they don't do this is because PLX got bought out by Broadcom and they jacked up the price of the chips. It turns out their most lucrative market was storage, because datacenters want to connect a lot of high-speed NVMe to one machine and they will pay buckets for a chip that will let them all share bandwidth.

So now consumers that want a lot of devices need a CPU with a lot of lanes, which means buying into a more expensive high-end desktop platform with more than the standard 16 or 20 lanes on it. Except these are also becoming scarce; both Intel's and AMD's high-end desktop platforms haven't been updated in years. The way high-end used to work was that they were just last-gen server chips with overclocking and more consumer-oriented boards, so you got the extra lanes and memory capacity, but you still got the gobs of USB and so on that you'd see on a consumer board.

So with no lane expanders on motherboards anymore, and no high-end platforms with higher lane count CPUs in them, the only other option for buying something with lots of slots in it is to buy server gear. Not only is that more expensive, but you also lose out on all the consumer-grade creature comforts you expect[0] and have to deal with the added noise of small fans in a rackmount case.

[0] VGA is considered standard video output on most server motherboards


I'm running a server motherboard (Supermicro H12DSi-N6), and while the processing power is wonderful, the tradeoffs were a bit surprising:

* I can't disable the BMC video card, which means that when I'm running Windows, there's an 800x600 monitor that will never go away and windows occasionally decide to open on it. This is solvable by opening the BMC on the machine itself and using the IKVM webapp to drag it onto my main screen; this takes a very careful flick of the mouse.
* There's no built-in audio; in order to listen to Spotify, I use a USB audio card.
* The board only has 6 USB ports, only 2 of which are USB-3. Thus, I get to use a lot of hubs.
* It takes a long time to boot past the firmware; generally 3-4 minutes to warm boot and 6-8 cold.
* While it can be used as a single-socket board (which I do, because I don't have the funds for a second CPU just yet), I lose a bunch of PCIe slots doing so; half the PCIe lanes for the first CPU are directly wired to the second socket for the inter-CPU link.


Bridge jumper JPVGA1 directly on the motherboard and it will disable the onboard VGA. Switch BMC priority from onboard (default) to offboard; I don't think this is necessary if you bridge JPVGA1.

Get yourself a nice Highpoint USB 3.0 card with quad controllers, one per port. 20Gbps of bandwidth in aggregate.

I believe there is a jumper on the motherboard that will bridge the missing PCIe slots in single CPU configuration.


If the 800x600 is on the right and a window opens in it, you can alt-tab to it or click it on the taskbar, then just hit Win+Left arrow until it appears on the correct monitor. Same for Win+Right arrow if it's the other way around. You shouldn't need to use IKVM.


On all Windows operating systems I have used, this workaround (winkey+left/right) unfortunately does not work for moving modal pop-ups.

For example, when starting KeePass by opening a .kdbx password safe file, KeePass will first open the master password input modal pop-up.

If that happens on a turned-off screen (IMHO it opens on the screen on which the KeePass main window was placed before the process exited on the previous run), you'll be lost.

Very frustrating, e.g. for multi-screen home theater setups.

If anyone has solutions to this problem, please share.


Win + Shift + arrow works better with modals because it doesn't try to resize/pin the window but just blips it to the next screen over, preserving size and location. I use it a lot when I move from my laptop to a multi-mon setup to sort windows to different screens quickly.


Moving a window: Alt-Space, M, arrow keys


If you Alt-Space, M, and press one of the arrow keys, the window will 'stick' to your mouse cursor so you can just wiggle your mouse after the key combo to bring it to you.


The old CUA Windows 3.0 corner square has an oblong rectangle that represents the space bar (at least in my mind). Unless the window is a child window, in which case the shortcut is Alt-Hyphen [1]; I haven't used that shortcut in many years. The fashion for tabs has probably made it useless.

[1] http://www.robelle.com/smugbook/win3x.html


I was inspired by nostalgia to do the research via archive.org: This shortcut goes back to Windows 2.0, almost 35 years ago. I remembered it from the 3.1 days.


If that's the only other monitor, you can set your actual display as primary, then select primary display only in the Win+P menu


| * It takes a long time to boot past the firmware; generally 3-4 minutes to warm boot and 6-8 cold.

^^ why is that? is it just the firmware being buggy, or are there any legit reasons why they need to self-check _for minutes_?


Server hardware operates remote and headless, so it performs much more exhaustive memory checks and will try to proactively report failures. Good datacenters will catch these and open a ticket to replace failing hardware.

Beyond that, lots of servers will then enumerate all of the network interfaces and try to initiate a net boot, and only then fall back to the hard drive. Some tweaking in the bios can usually disable these features and gain a bunch of time, but again in a headless remote context this default behavior is incredibly useful for recovery purposes. We like to avoid actually running out with a physical crash cart if we can help it :)


In case helpful, select the app on the task bar and use Shift+Win+Left/Right arrow key to move it between monitors without needing to do the IKVM dance


You could likely use AutoHotkey and WinMove to place the modal windows where you want. I'm not at my desk now, but I'll update this post with the script I use (or add it in a reply).
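
In the meantime, here's a rough sketch of the same idea in Python via ctypes rather than AutoHotkey (Windows only; the "KeePass" title substring and the coordinates are just illustrative placeholders). It finds a top-level window whose title contains a given string and drags it onto the primary monitor:

    import ctypes
    import ctypes.wintypes as wt

    user32 = ctypes.windll.user32  # Windows only

    def move_window(title_substring, x=100, y=100):
        found = []

        @ctypes.WINFUNCTYPE(wt.BOOL, wt.HWND, wt.LPARAM)
        def enum_proc(hwnd, _lparam):
            length = user32.GetWindowTextLengthW(hwnd)
            buf = ctypes.create_unicode_buffer(length + 1)
            user32.GetWindowTextW(hwnd, buf, length + 1)
            if title_substring.lower() in buf.value.lower():
                found.append(hwnd)
                return False  # stop enumerating once we have a match
            return True

        user32.EnumWindows(enum_proc, 0)
        if found:
            SWP_NOSIZE, SWP_NOZORDER = 0x0001, 0x0004
            # Reposition without resizing or changing z-order.
            user32.SetWindowPos(found[0], None, x, y, 0, 0, SWP_NOSIZE | SWP_NOZORDER)

    move_window("KeePass")  # hypothetical target window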


Can you disable the video driver in the system control panel if you don't want Windows to use it?

Other stuff seems normal for a server board though.


Damn, respect, your mainboard alone is probably worth more than my entire workstation.


> One related issue is the death of integrated PCIe switches on motherboards. High-end boards used to have PCIe hubs on them so you could plug in a lot of cards and still get all the bandwidth out of a 16-lane CPU.

PCIe switches weren't common or necessary for most consumer applications. Workstation and server CPUs have plenty of lanes on their own. PCIe switches occupied a relatively narrow use case in the middle where people wanted to use consumer CPUs but attach a lot of add-in cards and share bandwidth across them.

Even that use case has been eroded a bit with PCIe splitting options, where an x16 link can be split into x8/x8 or even x4/x4/x4/x4 depending on the motherboard. An x4 PCIe 4.0 link is as fast as an old x16 PCIe 2.0 link, so that's a significant amount of bandwidth still. PCIe 5.0 takes this even further, where an x16 PCIe 5.0 slot has over 60GB/sec of bandwidth, which is a lot even when divided into x4 or x2 channels.
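
For anyone who wants to check those figures, a quick back-of-the-envelope sketch (per-direction usable bandwidth, counting only the 8b/10b or 128b/130b line coding overhead):

    # Approximate per-lane signalling rate (GT/s) and line coding per PCIe generation.
    GT_PER_LANE = {2.0: 5.0, 3.0: 8.0, 4.0: 16.0, 5.0: 32.0}
    ENCODING = {2.0: 8 / 10, 3.0: 128 / 130, 4.0: 128 / 130, 5.0: 128 / 130}

    def gb_per_s(gen, lanes):
        return GT_PER_LANE[gen] * ENCODING[gen] / 8 * lanes  # GB/s, not Gb/s

    print(gb_per_s(2.0, 16))  # ~8 GB/s   (old PCIe 2.0 x16)
    print(gb_per_s(4.0, 4))   # ~7.9 GB/s (PCIe 4.0 x4, roughly the same)
    print(gb_per_s(5.0, 16))  # ~63 GB/s  (PCIe 5.0 x16)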

AMD especially made platforms with a lot of PCIe lanes relatively affordable. An AMD workstation motherboard and CPU with a lot of PCIe lanes might not actually be that much more expensive than a hypothetical consumer CPU with a PCIe switch. The other issue with a PCIe switch is that you're still limited by upstream bandwidth, so if you need x16 bandwidth running to multiple cards at the same time, a PCIe switch doesn't actually help you.


It's still a problem if you want to run previous gen hardware. For instance, if I have a Gen4 motherboard and pass through a second GPU to a VM - so the second GPU at Gen3 x8 plus the host GPU at Gen4 x8 - I'm only using a quarter instead of half the throughput the hardware actually supports, because Gen3 is half the speed of Gen4 per lane.

Or even think about Gen5, doubling Gen4 speeds: With a PCIe switch, I could get essentially 3/4 performance out of a Gen5 host GPU, and full performance out of a Gen3 second GPU, instead of 1/2 and 1/4 respectively.


Do GPUs even saturate PCIe3 x16 or x8 connections, let alone PCIe4 or 5, such that we need to be worried about bottlenecks?

The only thing I'm aware of that will actually saturate a PCIe connection is NVME storage.


For gaming, not even close. I've often had a GPU monitor running, charted bus utilization...and it's barely a few percent no matter what game I'm running, except for a very brief spike during texture and model load-in at the start of a round/level.


Specifically for VM's, something like Looking Glass (shared framebuffer so you don't need a dedicated monitor) needs the extra bandwidth to get the video signal through without tanking performance, especially at higher resolutions and refresh rates.

Aside from that, I agree it's not too big an issue for 95% of computer users, but it's something to keep in mind.

[1] https://looking-glass.io/


To quantify this: a 4k 60Hz uncompressed video stream needs approximately one lane of PCIe gen4 (though only in one direction, while PCIe has the bandwidth in both directions). So any realistically obtainable resolution and refresh rate will not be a problem for sharing the framebuffer on a GPU with PCIe gen4 x16, but lower-end GPUs often have just x8 and there are even some mobile GPUs with x4 being packaged onto desktop graphics cards now.
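
A quick sketch of that arithmetic (assuming 32-bit RGBA frames and ignoring blanking and protocol overhead):

    width, height, fps, bytes_per_px = 3840, 2160, 60, 4
    stream = width * height * fps * bytes_per_px / 1e9   # ~1.99 GB/s uncompressed 4k60
    gen4_lane = 16 * (128 / 130) / 8                     # ~1.97 GB/s usable per PCIe gen4 lane
    print(stream, gen4_lane)  # both around 2 GB/s, i.e. about one gen4 lane per direction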


If I remember correctly, running a 3080 10GB in PCIe 4.0 yields a >5% average performance gain in gaming benchmarks compared to running it in PCIe 3.0.

You could probably argue that you should just settle and not take the 5%, but when you are buying in at such a price point, you are likely concerned with 5% differences. There are also higher end SKUs such as the 3090Ti where that performance gap is likely larger.


I've had significant frame rate drops with PCIe3 x8 compared to PCIe3 x16 (Easy 20 - 40% drop, depending on the game)


This is precisely why I am not going to buy into this year's Ryzen motherboards; how am I supposed to attach all of my PCIe 2.0 LSI HBAs as well as my 2 GPUs and NVMe? My options are buying 2 machines (one for storage, one for compute), sacrificing speed somewhere, or buying into either Threadripper Pro OEM or EPYC ($$$).

The bandwidth is there by an order of magnitude, but not the lanes to actually use it.


>PCIe switches weren't common or necessary for most consumer applications. Workstation and server CPUs have plenty of lanes on their own. PCIe switches occupied a relatively narrow use case in the middle where people wanted to use consumer CPUs but attach a lot of add-in cards and share bandwidth across them.

PCIe switches still have a place in storage servers. I can build a backplane with a PCIe switch so I can get a ton of ports/slots for HDDs or SSDs and not have to buy a much more expensive CPU with the IO to handle everything. The trade off is that you are over subscribed and all disks will never reach full bandwidth simultaneously.
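
To put illustrative numbers on that oversubscription (a sketch with made-up but plausible figures: 24 drives at x4 gen4 behind a switch with an x16 gen4 uplink):

    lane_gbps = 16 * (128 / 130) / 8                     # ~1.97 GB/s per PCIe gen4 lane
    drives, lanes_per_drive, uplink_lanes = 24, 4, 16
    downstream = drives * lanes_per_drive * lane_gbps    # ~189 GB/s if every drive ran flat out
    upstream = uplink_lanes * lane_gbps                  # ~31.5 GB/s actually available to the CPU
    print(downstream / upstream)                         # 6:1 oversubscription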


It's a pain that even PCIe splitting isn't well supported on recent Z690/X670 motherboards. They just give us a PCIe 5.0 x16 slot that is unnecessarily fast but practically no better than 3.0 x16 for gaming (and most workloads) right now. I'd rather have PCIe 4.0 x8/x8 than that.


PCIe bifurcation is a function of what your CPU and/or PCH can do. There are hard limits in the silicon as to how you can divvy up your lanes.


Splitting the CPU lanes to x8/x8 is supported at least; motherboard manufacturers just don't want to implement it as a dedicated slot. It seems that Alder Lake no longer support x8/x4/x4 sadly.


> It seems that Alder Lake no longer support x8/x4/x4 sadly.

I would be very surprised if the silicon doesn't still support 8+4+4 operation. I suspect the lack of available motherboards with this config comes down to the expense of getting a PCIe gen5 signal to reach the third slot. It may require both the extra mux and a redriver or retimer.


I found this from quick Googling https://forum-en.msi.com/index.php?threads/does-pro-z690-a-a...

Perhaps the reason they dropped the support is what you wrote? Still, I wonder whether it's possible to have Gen5 x8 and Gen3 x4/x4.


> It seems that Alder Lake no longer support x8/x4/x4 sadly.

Sort of. Support is "Up to 1x16+4, 2x8+4" [1], so it could be used as x8/x4/x4, with the remaining 4 lanes of one x8 unused, as there are 20 PCIe lanes on the CPU.

[1] https://ark.intel.com/content/www/us/en/ark/products/228441/...


Ah yes, but everyone (and every manufacturer) wants to use the +x4 port for an M.2 SSD, so it's not usable for other PCIe devices.


AMD definitely has a recent high-end desktop/workstation platform with its ThreadRipper Pro WX CPUs (EPYC server CPUs) that have 128 PCIe 4.0 lanes and 8 memory channels. And enough PCIe slots to actually use this throughput with the right kind of hardware, like a bunch of PCIe4.0 NVMe SSDs [1].

Yep, the platform is 2 years old, but supports both the Zen2 and current Zen3 CPUs (but not PCIe5.0 & DDR5, as Zen 4 requires a different CPU socket and is only announced right now, not available from AMD until the end of this month).

However, 8 x DDR4 channels and 128 x PCIe4.0 lanes still give you better total bandwidth than the 2 x DDR5 channels and 24 x PCIe5.0 lanes that the few currently available DDR5/PCIe5 platforms support.

[1] https://tanelpoder.com/posts/11m-iops-with-10-ssds-on-amd-th...
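
Rough totals behind that bandwidth claim (a sketch assuming DDR4-3200 vs. DDR5-4800, counting each DIMM channel as 64 bits wide, and per-direction PCIe bandwidth after 128b/130b encoding):

    pcie4_lane = 16 * (128 / 130) / 8          # ~1.97 GB/s
    pcie5_lane = 32 * (128 / 130) / 8          # ~3.94 GB/s
    ddr4_channel = 3200e6 * 8 / 1e9            # 25.6 GB/s (DDR4-3200, 64-bit channel)
    ddr5_channel = 4800e6 * 8 / 1e9            # 38.4 GB/s (DDR5-4800)

    threadripper_pro = 8 * ddr4_channel + 128 * pcie4_lane   # ~457 GB/s
    mainstream_ddr5 = 2 * ddr5_channel + 24 * pcie5_lane     # ~171 GB/s
    print(threadripper_pro, mainstream_ddr5)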


The unfortunate part is that Threadripper nowadays is OEM only, thus it is very expensive.

You can just switch to EPYC with hardly any difference in price point.


The Threadripper Pro 5000-series is available for retail purchases [1].

When 3000-series was launched, it was first OEM only (Lenovo) but later became available in retail too.

[1] https://www.amazon.com/AMD-ThreadripperTM-PRO-5975WX-64-Thre...


Oh nice, I must have missed this. Motherboard prices are leaving something to be desired (lowest price is $900), but still, this is good news.

Edit: On second glance, the markup compared to last-gen Threadripper is gnarly. The 24-core SKU's MSRP is $2.4k, compared to the 3000-series SKU which had an MSRP of $1.4k.

At that point just go EPYC.


And other creature comforts like decibels, I suspect.


AMD Threadripper got an update this year, and a new release should come early next year. 128 PCIe lanes.


Can confirm, much of my high-compute architecture design work centered around heavy analysis and comparison of existing and promised backplanes. Especially when you throw images in the mix and need gpus, but you need the whole stack to be HC/HT or you hit bottlenecks later.


I'll raise you no working video output on my consumer motherboard! cries in GPU-less AMD CPU


How do all these lanes get into the CPU? There are only so many pins on a chip. Are they time-shared? If so why do you need n physical slots and not just a switched network that lets you daisy chain as many as you can switch? (I’m an architecture lay-person.)


Threadripper comes in a seriously big package. 4094 pins, compared to 1718 for AM5 (the new ryzen socket).


PCIe already is a packet-switched network, but the big and fast switches are very expensive.


It's a demand issue.

Business users get: laptops, mostly. You need a desktop? It's either because you aren't trusted/don't need to take a laptop around or because you need a workstation. The low end doesn't need PCIe slots for anything: integrated GPUs are good enough to do all generic business work, integrated sound, network, blah blah blah.

So we've identified workstations as the business case for more PCIe slots. The AMD workstation CPUs (ThreadRipper and EPYC-P) have lots and lots of PCIe lanes. More than that. The Intel workstation CPUs ... don't. G12 has 16xPCIe 5 and 4xPCIe 4. So you buy AMD this year.

Home users: either you're a casual gamer or a hardcore gamer. Either way, you need a maximum of 2 PCIe slots for GPUs, and you're done.

What's left? People on HN who run server and workstation tasks on repurposed desktop hardware, some of whom buy used server gear and live with the noise, some of whom buy AMD workstation gear.


So few buy used hardware. I got a used Dell laptop and the WiFi card was shit. Dell doesn't always make the right call, but because the WiFi card was a replaceable PCIe card, I was able to get more life out of the machine. On the one hand it was good for me. On the other hand it was bad for Dell because I didn't replace the whole machine. Dell is learning, however. Their new machines have everything soldered in place with no pesky upgrade/repair options, not even a stray NVME or RAM slot. It's amazing they left a USB port.

I've used PCIe a lot for storage and networking applications in laptops, servers, and desktop form factors. It has allowed for much cheapskating--which is why it's got to go. Repair and expansion options are bad for business.


Curious what replacement card you went with. I know the WiFi/BT are combined; my Bluetooth headphones were cutting out last night for no reason at all, and the thought came around again to swap in a replacement.

Nice to meet another used hardware buyer. I think the total cost of my laptops over the past 10 years equates to roughly the price of a single new high spec Macbook, and that's with repeat replacements due to damage / spills / etc.


I went with an Intel 7260HMW BN for $20. Dual band with Bluetooth 4.0.

USB 3 can trash your WiFi/BT spectrum as many device manufacturers provide inadequate shielding. I got some foil tape and carefully lined the inside of an external drive enclosure and fixed some intermittent WiFi issues. The FCC should scrutinize USB devices more, I think. Probably whack-a-mole though.


If the FCC cared, windowed gaming PCs wouldn't have ever been a thing.


Couldn’t you have a window with a grid or grill, like a microwave?


Intel Wifi 6 is as solid as they come.


The PCIe versions of those cards are widely known to experience frequent microcode hangs (which can also hang your entire machine) depending on "some circumstances" (allegedly linked to 5 GHz 802.11n or something).


> Their new machines have everything soldered in place with no pesky upgrade/repair options, not even a stray NVME or RAM slot. It's amazing they left a USB port.

I'm gonna shill a bit again and say the LG Gram series belies anyone saying this has to be done for weight / slimness reasons, as even the 1.0 kg 14-inch model has two M.2 2280 slots.


Instead of building gaming rigs, I keep my knowledge and skills current by reviving/reusing/repurposing older hardware. Some new ram, a fast SSD, system is good for another 5 years or more. It keeps an old hobby alive and saves me quite a bit of money in the process.


PCIe x1 for networking is quite common, PCIe-mounted SSDs aren't unheard of, and PCIe USB expansions seem pretty common too (others already mentioned audio/capture cards). Not much else I can think of unless you're getting really out there.


I find PCIe USB expansions and SSDs very uncommon, but I guess my sample size is mostly nerds who also game, not even sure when I last opened up a work-issued desktop PC...


I use a 7x USB 3.0 expansion card for my gaming system, because I hate hot-plugging input devices into my front panel I/O. My gaming uses a mix of peripherals from a Keyboard and Mouse (separate from the ones I use for productivity work on the same PC) to an Xbox controller to a HOTAS & Pedal kit, plus charging cords for my Valve Index controllers that I leave plugged in and routed. And of course high-end gaming headsets usually use USB these days, plus a webcam. I use a lot of ports. :)


Recently I had some USB enumeration loop (turned out to be the Index umbilical connector) that made me disconnect everything one by one. What a marathon. The USB hub dedicated to wireless dongles alone has a wireless keyboard/touchpad adaptor, the one of the Steam Controller and an ANT+ stick. Elsewhere the homebrew stick/rudder is linked up, the Nrf52 devkit, keyboard, mouse, another touchpad, the hub where I connect Android devices for development and occasionally a Garmin for some cIQ stuff. A modified webcam I used to use for infrared headtracking and that cheap portable usb audio with built in phantom power for a condenser mic headset (the firewire audio interface seems to be acting up). The old laser printer is permanently connected whereas the scanner is only plugged in on demand. Currently disconnected are the throttle quadrant, the trim box (with yet another touchpad, still ps/2) and the midi controller. Somewhere there's an arduino configured as an nrf52 programmer. Yes, I need a notebook computer free of most of that stuff to get actual work done.


And if you get 3x full body vive trackers for your index, those each have a USB receiver and you need 3 more ports and they can't be consolidated for some reason. And if you want to charge those from your computer, that's 3 more ports.

I have an anker 5-port USB power hub (no data, not connected to computer) next to my machine for charging phone, headphones, earbuds, etc, and another 10-port hub for charging the 2 index controllers, 3 full body trackers, and 3 track straps with builtin batteries for extended play (for days when you feel like spending more than 7 hours in VR anyways...)


That's a reasonable method but I think most people would use a hub to achieve those goals.


“M.2 NVMe” SSDs are PCIe add-in cards in a laptop form factor.


> Intel workstation CPUs ... don't. G12 has 16xPCIe 5 and 4xPCIe 4.

Intel's current workstation CPU line, launched a year ago, has 64 PCIe 4.0 lanes. https://www.pugetsystems.com/recommended/Recommended-Systems...


I didn’t realize Xeon had improved that much but his point still stands as threadripper has 128 lanes. AMD is better at the moment.


I believe the red team processors don't treat all PCIe lanes equally, much like the earlier blue team processors did. So that 128 lanes is not 128 lanes. More like 128 lanes with caveats. Though my information may be out of date as it has been a year or so since I last looked at AMD options.


The only caveat I know of is that on a dual socket system you won't get twice as many lanes. 48 (or 64, depends on the configuration) of each are used for the interconnect to the other socket. Then of course the performance of any slot-to-slot traffic may depend on the different paths taken inside the interconnect, but that only matters for very high-bandwidth of low-latency applications.


It is true, the CPU-to-CPU interconnect fabric on both blue team and red team play into that, and it varies between generations and motherboards and chip sets. It gets very confusing quite quickly. Some server boards will let you dedicate PCIe slots to a specific CPU, others won't let you not dedicate certain slots to a CPU.


Unless you're looking at a Zen 1 chip, it should all be pretty equal.


There’re plenty of uses that you may not see: I have a capture and sound card in addition to two GPUs.


I'd be confident in saying that you're an outlier in that though.


That may be true but in an industry that produces millions of new computers every year there's quite a bit of room for outliers. After all someone has been buying all these specialised expansion cards for all these years.

Graphics, sound and networking may have been the biggest reasons and may now be adequately catered for in everyday PCs without the extra hardware. However the high end of all of those markets still needs more than what you get on any basic motherboard and processor and then there are all the speciality cards as well catering for who knows how many niche markets.


No, we're all doing that now. This loss of PCIe expansion is really annoying and I'll probably be skipping this next gen largely for that reason. High-end Threadripper is unobtainium and has been for the last couple years. But I have a GPU from like every generation that I wish I could plug into my PCIe bus and have it be useful; if I could buy a platform that supported that I would.


I have a PCIe sound card but I have to say these days that I'm actually using a tiny apple usb dongle for my audio, and it sounds cleaner than the internal sound card does. The dongle was also a fraction of the price of the sound card, and way more convenient.


Which of those need to be at PCIe3 or 4 bandwidths rather than USB3 bandwidths?

Which of them need to be at PCIe latencies rather than USB latencies?


Capture cards need the bandwidth. Whether they need the latency is arguable, but they need a lot more latency determinism than USB tends to offer out of the box.


Introduced latency on the capture side makes latency tuning your entire production pretty difficult. For non-real time usage, sure, latency in the 100-200ms range is more than acceptable (assuming it's deterministic, as you pointed out), but in the real-time world? Keeping things within a frame is pretty much required, and with the popularity of software-driven studio workflows across both amateur streaming and professional production, it's been real hard to get reliable performance out of USB hardware that didn't add frustrating amounts of latency due to pre-ingest compression or seemingly random amounts of delay due to protocol or CPU time starvation.


Yep, there's a reason I have Blackmagic quad 4K capture cards in my workstation. Syncing multiple video streams with USB capture cards would be nigh impossible even if you put them on separate USB controllers. Ingest over USB is fine (though slow) but pretty much every USB capture card does its own internal compression, as you point out, and then involves the CPU to decompress it and get it into VRAM or DRAM.


Realtime video production is definitely an outlier. You probably want a workstation-class system anyway, with a full TB of RAM so you know that's never an issue.


USB latency should be under a millisecond as far as I know.


This is both only true for small transfers (not bulk/asynchronous transfers) and for the ~99th percentile. Large transfers have some buffer management and handshaking, so they tend to have highly variable latency that has a very fat tail.

The latency degradation for large transfers is so noticeable that most audio DACs (not just ones for gullible audiophiles, also for the pro market) use custom drivers and USB protocols. For a tiny fraction of the data rate that a capture card would need.


but is it /consistent/?


USB everything tends to end in dongle spaghetti. And then plugboard spaghetti to hold all of the power adapters. Just the feature of being an expansion mounting rack and power/data backplane is valuable even if it only ran at basic USB 3.0 speed.


I use multi-seat on Linux. You need one graphics card per seat. So our whole family shares one computer, but everyone has their own workstation with monitor, keyboard, mouse and headset. Ohh you also need separate sound cards.


I understand how to do this... but I have to ask, why?


Not the OP, but I've thought about doing something like this, because you can justify a much fancier computer if it's shared. I have no reason for 16-cores, but 16-cores across three people is reasonable, maybe not even enough, so better get something bigger.

Then at least 64 GB of ram, and a sweet disk array, etc.

I did run a dual head Windows environment for a while, but it did have issues from time to time.


Yes, you need at least one beefy system for gaming, so why not let everyone share it? It could run machine learning during the night.

Having everyone gathered in the same room is also a feature. But you will want to have a good headset =)

Did try Windows but Windows has no built in support for multi seat, so it didn't work well. Linux, however, has great support.


> Did try Windows but Windows has no built in support for multi seat, so it didn't work well.

Windows kind of does have built in support for multi seat [1]; it's just that Microsoft won't sell it to you, and if they did, licensing costs would be enormous for 2-5 people. I was using third party software to enable multiseat, which worked ok for the version of Windows it was built for, but hotplugging USB was always an opportunity for calamity, etc.

[1] https://docs.microsoft.com/en-us/windows-server/remote/multi...


> Ohh you also need separate sound cards.

hdmi/displayport audio?


It's a shame that Linux can't do multi seat with a single video card that has multiple outputs. Same goes for the sound card assuming that stereo sound is enough.


I'd like a PCIe card with room for more NVMe drives on it. I'd like to replace my SATA connected SSDs for performance, but I can only stick two M2 cards on my motherboard.


Highpoint and Squid are your go-to options in that space. There are others, e.g. dumb cards, if you just want to bifurcate an existing PCIe slot to handle four M.2 drives. But if you want 16 or 64 NVMe drives, the two manufacturers I mentioned have offerings. I have a Highpoint U.2 card, which gives me the option to handle 8x U.2 or M.2 drives (with adapters) in a single PCIe slot.


There are multiple products in that space, such as https://www.asus.com/Motherboards-Components/Motherboards/Ac...



The vast majority of users are happy with a few TB of fast storage and the rest on regular hard drives.

The users who aren’t tend to be building servers.


You buy a cheap adapter to plug an M.2 NVMe drive into a PCIe card

and vice-versa (why? M.2 slots have ACS support for PCIe passthrough)

They're essentially just a wire, as it's the same bus


> You need a desktop? It's either because you aren't trusted

What do you mean with "you aren't trusted" ?


An awful lot of people work in a corporate desktop environment where the machines are bolted down. They don't go home with you. You don't have admin privileges, you can't install software, and there's probably corporate spyware installed.

Customer service. Tech support. Inbound sales. Outbound sales.

The company doesn't trust you. If you take a machine home, you're probably going to lose it or break it or sell it, so that's a hard no.


It's also a security risk. They may generally trust you, but may not trust you 100% in the little detail of never ever getting malware on your laptop, which would then invade their internal network. I'm actually astonished that so many companies allow you to have a laptop that you take home and that also connects to the company network.


Being “in the network” doesn’t matter anymore because everything has moved from unsecured intranet services to Internet exposed stuff authenticated with SAML. One user having malware is as much of a threat as someone else at the cafe having malware. No longer an issue.


If you were correct in this assertion, then there would not be a malware epidemic.

Ipso facto...


The world isn’t uniform. Quite a lot of companies continue to self-host and take care to not expose their data to the cloud.


Those companies have to continue using bolted down PCs without admin access then. For the rest, take home laptops with full access work fine.


Admin access is fine, you only need to block the USB ports and secure your network and domain policies heavily enough. The alternative to desktop PCs are VMs in the company's data center accessed either remotely or from local thin clients via remote desktop.


I can't imagine how unproductive people would be working under all these restrictions. You walk into the meeting room but can't plug your laptop into the TV because it's using a USB-C cable. Can't visit the websites you need to get your job done because they might contain software you can install on them. Have to wait for IT to approve everything and just sit around being unproductive.

Just so the company can continue to have insecure internal services.


Actually most of the time with companies that still self-host I see take home laptops with VPN since work-from-home is a must. So not only does the risk still exist, the malware doesn't even have to wait for you to go back into the office.


> The low end doesn't need PCIe slots for anything: integrated GPUs are good enough to do all generic business work, integrated sound, network, blah blah blah.

Although 'low end' may be a misnomer, you do have a point.

But then, if expansion slots are not needed, you can go with ITX motherboards as that's the point. If I'm buying a full size ATX board, it better have plenty of expansion capabilities.


Many a time a workstation ATX board is necessary only because you want tons of RAM. All the card space is essentially wasted PCB.

Since Threadrippers until very recently didn't support RDIMM, you could only put 256GB in them -- but even for that little you needed eight slots. Source: https://www.reddit.com/r/Amd/comments/6icdyo/amd_threadrippe...

It was the TR Pro last year which finally started to support RDIMM. No one released an mATX version though, pity. The earlier platform did have an mATX version but only four DIMM slots, kinda pointless there.

(My laptop has 64GB RAM, so I do not think 256GB on the desktop is unreasonable.)


> Business users get: laptops, mostly.

That's far from universal. The business users I've worked with recently almost all have desktops. A few (mostly management) have laptops for work. I'm sure a lot of that has to do with whether the business is remote-work-friendly. There are still quite a few businesses who are decidedly NOT.


You're forgetting independent creators and artists


To be fair, the chip and system makers have mostly forgotten them too.


I think that falls into the workstation category?


Not really, all you need is a beefy PC or laptop. These guys aren't usually running Xeon or Threadripper


Yeah it really sucks that you can't get a normal computer with decent A/V capabilities (or expansion slots to add them) and instead have to opt for a workstation you don't need since "normal" users apparently only use office.


Not all motherboards have built in wifi, which leaves the option of a USB external adapter or a PCIe card.


The higher end Intel CPUs always have tons of PCIe lanes.


Not even business: since 2011, on the projects I worked on, the workstations, if needed, were always some beefy cloud VM.


Demand and capitalism. Bigger, better, and newer.


Your article mentions M.2 to PCIe adapters, and having just worked with a few of these in the lab, some extra information for you:

1. The ADT-Link parts that you see on AliExpress are the most common ones; most other parts you see will be rebadged or resold versions of those. If you're looking to play around with one of these, http://www.adtlink.cn/en/product/R42.html is resold on Mouser as https://www.mouser.com/ProductDetail/DFRobot/FIT0782?qs=ljCe...

2. There are some mechanical compatibility issues with these adapters due to the screws connecting the cable to the M.2 card: M.2 connectors on the motherboard side come in various heights, and not all have sufficient room for the bottom of the screw to clear the motherboard, which prevents the card from being fully inserted.

That said, the adapters work great at PCIe Gen 3 speeds. I probably wouldn't expect them to work above that in the general case.


I'm running the kind of eclectic setup that expansion slots were meant for. A board that has three full length PCI-E slots in it, with three video cards.

Running a multiseat machine (i.e. truly independent login stations) pretty much requires a video chip per seat.

It's been an interesting ride. Graphics card fans weren't really meant for 24x7 operations. After a recent power failure, gummed-up lubricant caused one not to start again, and the card suffered heat death. Also, multi-seat login was abandoned in GDM, i.e. broken and to my knowledge never fixed, so I have to use LightDM with no alternatives. Also there were stability issues with nVidia cards, but three Radeon cards work fine.

Possibly with the latest hardware, a GPU-per-seat setup could be done with Thunderbolt? Anyway meanwhile we soldier on on the cheap, with a circa 2012 vintage ultra-high-end gamer machine still providing adequate compute power for all the screens in the house.


It is unfortunate that the consumer grade GPUs cannot be shared in Proxmox/ESXi/UNRAID like you can with the "pro" level cards. One of the four major benefits of going with RTX A5000 cards over 3090s was that I can share one or more GPUs amongst several virtual machines, i.e. Shared Passthrough and Multi-vGPU.


https://github.com/Arc-Compute/LibVF.IO/tree/master/ plus https://github.com/gnif/LookingGlass works pretty well. If you use an Intel GPU, particularly one of their new Arc dedicated GPUs, it supports the functionality on the consumer grade hardware without any trickery and you just need Looking Glass to map the outputs.

If you really want multiple GPUs though you can also use a normal 1-in-4-out type PCIe switch and save a lot of cost on Thunderbolt components in-between. Low bandwidth ones are particularly dirt cheap due to crypto mining volume. Keep an eye out for ACS support or you may have to use an ACS override patch though.


> If you use an Intel GPU, particularly one of their new Arc dedicated GPUs, it supports the functionality on the consumer grade hardware without any trickery and you just need Looking Glass to map the outputs.

Does this work yet? Last time I looked, my understanding was that SR-IOV is supposedly supported on Arc but the software doesn't exist yet, and might not for some time.


I don't think it's due for upstream until next year. You can pull it and mess with it now though if you're willing to buy the GPU from eBay or China. I haven't seen any US/Europe retail postings yet. For most it's fair to say it's not actually available yet though.

I'm sad they got rid of GVT-g with the change though. SR-IOV is definitely a nice add but it has downsides on the resource sharing. Undoubtedly GVT-g was just considered too unsafe and too niche to keep though.


Interesting stuff, but why share the hardware between multiple virtual machines instead of regular Linux users? Each video output port should have its own X or Wayland session.


Typically this is done for Windows guests.


> Graphics card fans weren't really meant for 24x7 operations.

Are those seats highly active all the time? If not, plenty of GPUs have fans that shut off when idle.


I just built a new PC after like 8 years and had to buy a cheaper mobo in order to get access to more PCIe slots. I was baffled why there were so few on them and was wondering if I could even fit 2 video cards on the one I originally wanted. I almost hit buy before I went: wait, no, something does look wrong, where the hell are all the slots! Definitely a bit of a worrying trend. I expect mobile devices to get shittier and shittier as they remove expandable memory and headphone jacks, but not my precious PCs with so many components to choose from and customize with. Luckily I didn't really need to put any of my old stuff into my new rig so it does indeed look empty. I got that Fractal case that seems like a dream for customizability and it feels like a waste that I'm not utilizing it.


The PC industry has been scroogling users on I/O since the late 2000's; the main entity to blame is Intel, with its vainglorious plan to take over the complete BoM for PCs. Now AMD is at the center of the phenomenon.


There may have been a stealthy power move going on back then to kill off the GPU by denying them anywhere to plug in. When Intel successfully killed off 3rd party chipset makers, nvidia was clearly quite worried and successfully sued Intel, with the result that Intel was required to keep PCIe expansion available on its chipsets with enough bandwidth for GPUs. (This agreement expired some years ago; I think it was only for a decade.)

This wasn't an unreasonable worry for nvidia to have. They'd just lost preferred chipset status on AMD platforms when AMD purchased ATi. Intel had released some lower end platforms with only 1x pcie connections (nvidia had responded with nvidia Ion which could work over such a tiny link) and AMD was talking big about their APUs.

It was a reasonable fear back then that Intel and AMD, with a tightened grip on their platforms and their own integrated solutions competing in the same space, might choose to just cut off nvidia's air supply by flooding the market with chipsets that didn't have the connectivity Nvidias cards needed.


> The PC industry has been scroogling users on I/O since the late 2000's…

What does this mean? (I'm familiar with the Microsoft ad campaign, but that doesn't make sense to me in this context.)


Unfortunately it's an obscure topic because people have been so dependent on laptops and phones that few people know about the regression of desktop PCs.

Here's a simple example.

I found an old i3 PC from Intel in my house that was left by one of my son's friends. The CPU is artificially limited to 16 PCI lanes.

You might think you could plug a discrete GPU in, it only uses 16 lanes, but no, some of those lanes are consumed by storage, USB ports, super I/O, etc.

So this computer is e-waste now because it can't be upgraded to keep up. This kind of delinquency is only possible because Intel has pushed barely-functional integrated "GPU"s.

Back in the 1980s and 1990s you had to know some rules about how interrupts were assigned to slots to build a working PC. Since the early 2000's, PC builders face a number of barely documented rules about how PCI lanes are assigned which boil down to "i3 and i5 are e-waste, buying a budget or mid-range motherboard is a waste of money because upgrading your machine will be a matter of 'you can't get here from there'"


Even with 8 lanes available a GPU should work fine. If it's a very high end GPU it won't run at full speed, but this is an old i3 so probably not an issue. Some low end GPUs only use 4 lanes to begin with. Did it actually fail to work when you added a GPU?


I think the other poster is... extrapolating his one-time experience to everything.

I personally did the "x16-in-x1 mod", i.e. removed the plastic from the connector so an x16 video card could be installed in an x1 slot (and freed up the x16 slot for a network or RAID card, I don't remember which at the moment). The video card worked fine. The PCIe standard requires cards to work with whatever number of lanes is available.

https://en.wikipedia.org/wiki/File:PCIe_J1900_SoC_ITX_Mainbo...


Yes


I think the question was more around the use of the word “scroogle”.


Yes! From the interesting answers I think "scroogling" was used as a synonym for "screwing" or "gaslighting". I've never seen it used outside of the Microsoft ad campaign, so it's interesting to see it in the wild.

> scroogling


I think it was a typo for scrooging, which is a common enough word for being cheap (USA) / tight (UK).


In the 80's PCs and software ate the world, and one of those reasons (apart from DOS/Windows) was that anyone could put something on a PC expansion card which more or less directly spoke to the CPU and extend the hardware.

So the PC acquired a very diverse set of cards and was able to do heavy processing in a very diverse set of roles - from medical equipment, cash registers, MIDI/music composition, graphics, publishing, sound, networking, etc. No business was untouched by the PC in the 80's--the ability of anyone to make hardware for the platform was a significant part of that being as cheap as possible.

PC hardware seems to be slowly moving towards being like a cell phone - everything onboard/builtin, ports picked by the manufacturer (you will get 1 USB-C port and like it), and if you want to add something, your primary option is a USB-type port of some sort - with its own world of firmware, controllers, etc.

Look at a smartphone and a modern ODM-designed cheapo laptop motherboard - they're very similar.

If motherboards stop being expandable it might mean the PC hardware ecosystem isn't going to be able to innovate except at the behest of high-capital firms like Intel, AMD, etc. That might be OK if we are truly at the zenith of what's possible with a PC, and really want to give the keys to the kingdom to those firms.

And who knows - I know somewhere in there USB4/Thunderbolt on some level is able to move PCIe traffic, so maybe it won't be so bad.


> That might be OK if we are truly at the zenith of what's possible with a PC, and really want to give the keys to the kingdom to those firms.

No, we don't, and they've already shown we can't.

Intel kept us at 4-core shit-tier processors for a decade, and the moment AMD managed to get a leg-up on Intel, they killed their HEDT platform because it competed with their server platform in PCI Express lanes and turned their back on all the gamers that brought their sorry ass back from the brink of death.

Both of them have been shown to be greedy, opportunistic shitbags of companies that cannot be trusted.


USB3, USB-C is no answer.

Back when the USB1 spec came out it said you could plug 127 devices into a port through a hierarchy of hubs.

With three different laptops and a plethora of USB 3 hubs I found the system would support only a limited number of devices in the tree. If you plugged in too many devices (somewhere between 3 and 5), plugging in a new device would cause an existing device to drop out. It's just annoying if it's a keyboard or mouse that gets dropped, but if it's a storage device, data could get corrupted.

I looked at the USB 3 spec and couldn't find any guarantee that there was any number of devices you could plug into hubs and have it work.


> I looked at the USB 3 spec and couldn't find any guarantee that there was any number of devices you could plug into hubs and have it work.

It's a controller and/or BIOS limitation[1][2]. The complication is that while USB devices can be addressed using 7 bits = 127 devices total, each device usually creates more than one endpoint, and each endpoint consumes controller and system memory resources. The BIOS allocates the memory, and the amount is apparently hardcoded (guess a setting would be too difficult). If you have many USB 3 devices with a lot of endpoints, that memory runs out quickly.

In addition, each endpoint reserves some bandwidth, so the uplink needs to be able to provide that bandwidth.

[1]: https://community.intel.com/t5/Embedded-Intel-Core-Processor...

[2]: https://borncity.com/win/2019/09/06/windows-10-not-enough-us...


Now that you mention multiple endpoints, I noticed (admittedly, this is in USB1/2, which seems to be a totally separate system from USB3) my KB shows up as multiple HIDs. Probably for RGB and n-key rollover. But I wonder how it is implemented? Is there an internal hub (adding one more layer of hubs, on top of whatever is done internally in my PC before the physical USB port - a lot, if lsusb/lspci-tree-list is to go by)?



Quite honestly USB 3 has always been flaky for me; I avoid using USB 3 hubs because they don't work reliably anyway, and in many cases the front USB 3 ports cause errors because... I don't even know. To say nothing of using cables longer than 50 cm. Back in the late 2000s USB 2.0 was similarly troublesome as USB 3 remains today. Slow USB 1.1 devices always seem to work though.


I think on desktops most users have enough I/O, and on laptops the Ultrabook-type trend is not particular to AMD or Intel.


The rise of laptops seems like the big one to me. Why would I make a device in a PCIe format and alienate a huge chunk of my market, instead of using USB-whatever?


With the lack of expansion cards combined with M.2 drives, modern PCs look completely different from just 10 years ago. Just a big motherboard with a video card and a massive cooler. Those big cases look mostly empty. All the wiring is just for RGB now. :)


Even spinning-disk hard drive arrays (another good way to fill out a large case) don't get much love since SSDs have gotten cheaper and people rely more on cloud storage. My next rig will, at least, continue to have an ever-growing number of 18TB hard drives as a huge volume managed by Windows Storage Spaces.


Rosewill still makes 24-bay 4U cases, and there is Backblaze.


Most people just don't have that much data. Those huge HDD arrays were mostly used for pirated media but these days most people are on streaming. Even for games, it's usually easier to just delete the games you aren't using and download them back when you want to play them.

Many modern cases opt for only one or even no HDD bays because it makes the case look cleaner, smaller and gives it better airflow. Gone are the days where most cases had top to bottom bays of DVD and HDD slots


HDD arrays for “Linux ISOs” have never been better. In fact, I’m pretty sure that modern day is something of a golden era thanks to the dropping prices per TB on spinning media.


Yup. Currently building a new storage setup with 18TB disks. Each of my 18TB HDDs is also backed by a 512GB NVMe cache to speed the already fast HDDs up.

Distributed raids (like ceph and glusterfs) and raid expansion (recently added in zfs) also make it easy for a homelabber to get started cheap and continuously add new disks.


Right, my new PC has an m2 ssd and a NAS in the closet.


Yup I ran out of PCIe slots in my last computer-

1x Bluetooth+Wifi adapter (offload the USB bus)

1x Highpoint USB 3 controller (the only USB3 controller that is reliable for work)

1x Quad network ports adapter

1x Video card

Things I wanted to install but couldn't-

1x M.2 nvme adapter card

1x Video card


Consider yourself lucky. During the GPU apocalypse I managed a Newegg Shuffle for a GPU that included this motherboard: https://www.newegg.com/gigabyte-b550m-ds3h/p/N82E16813145210

Fast-forward a bit and I used it to build my kid a new machine, except my old RTX 2060 was a "2.5 slot" card which means it's literally the one and only PCIe card I can install. I had to get him a USB wifi adapter...


> ...During the GPU apocalypse...

@duffyjp : Maybe i'm mis-reading your comment here...but are we out of the GPU apocalypse yet? Genuinely curious, because I've been holding out getting a small desktop PC for homelab use. (Yes, yes, i know for server homelab stuff i don't really need a GPU, but GPU pricing i think tends to portend overall computing cost nowadays.)


The shortage is basically over, and nvidia severely over-purchased fab capacity from TSMC for the 4000 series. Prices have tanked and will likely keep going down unless nvidia can sell off the 5nm fab time.


Expect GPU prices to go down in a little over a week from now


USB has a non-trivial CPU overhead compared to PCIe.


The argument there is that you should probably have looked at a workstation CPU and board, e.g. ThreadRipper.


Threadripper has worse single-core performance than Ryzen, lacks 3D V-Cache, only came in a Pro option last generation (which was vendor-locked to Lenovo for months), and has been removed from next generation's roadmap.

As I mentioned in the "Slots vs. Lanes" section in the article, most home users don't actually need more lanes -- just more slots.


> most home users don't actually need more lanes -- just more slots

Indeed. I really don't understand why they can't have x16 slots instead of those useless x1 slots.

My current motherboard has two PCIe x1 slots, running either 3.0 or 4.0. Plenty of bandwidth for lots of stuff. But they're useless because every expansion board has an x4 or larger connector. Previous motherboards have been full of useless x1 connectors as well.

I know there are exceptions, but x1 is certainly the norm.


Open-back x1 connectors do exist; then you can put whatever you want in them. Would be nice if motherboard makers would use those.


You can also use a dremel if you feel adventurous


Do you have enough lanes overall? I wonder if a splitter might work for your needs.


This has been going on since the second IBM PC (probably). On the original PC even RAM expansion was on an ISA card, and there were cards for serial, parallel, floppy disk, CGA, MDA and probably more. Those functions were gradually moved first to the motherboard and then into the processor until it became the SoC we have today.


The original 5150 PC had 5 slots. The followup XT had 8 (one nonstandard) because 5 was skimpy, especially before high integration-- if you needed a whole slot for just a serial port or a battery-backed clock, and separate cards for floppy and hard discs, they ran short fast.


...and instead of USB 5.0, we have USB 4, version 2.0, probably so they can snicker when we realize it's USB 420...


Ha! I even snarkily made fun of the USB naming structure in the article, but I didn't catch that the Promoter Group embedded a 420 joke.


I have a large graphics card that takes up 3ish slots of PCIe on its own even if it's only taking the 16 lanes it gets.

There's room at the bottom for a wifi card with an external antenna that is connected by wires to the rp-sma connectors on the card.

That's it. That's all the room I have on a full size tower. I used to have a FireWire card for my audio interface but replaced it with a USB-C model to not worry about FireWire support in current year.


That's another thing that bothers me about motherboard manufacturer's choices around PCIe -- one that I didn't cover in the article. Why on earth are they choosing (even on EATX motherboards) to put the extra PCIe slots within 1-2 spaces of the primary GPU slot, when they are so incredibly likely to be covered up? Instead, that's a great place to put the M.2 slots. Move the few remaining PCIe slots down, so they can actually be used.


Unfortunately you cannot actually "move the slots down" due to how motherboards and CPUs work and still keep everything compliant and correctly timed. You can put in a PCIe extender, stiff (daughterboard) or flexible (ribbon or bracket) which works, but you are playing the game of "it might work, it might not" and juggling three variables of motherboard timings/quality, PCIe card timings/quality and extender quality. You don't know if it'll work until you try it. There's multiple reasons why those slots are where they are.

I have two ASUS dual CPU workstation motherboards and run some of the devices with PCIe ribbon extensions. I cannot run my Highpoint U.2 RAID card on an extension but I can run a Highpoint USB 3 card just fine. I can run a 2070 GPU just fine, but the RTX A5000 GPUs are flakey. The dual Blackmagic quad 4k capture cards want to be in specific motherboard slots and work okay on one particular brand (Thermaltake) of extension cable.

The problem is nuanced.


M.2 drives can get hot; installing them under a hot video card cooler is a judgement call, especially if there is more space elsewhere. The truth is most people probably don't use their x1 slots.

I found some ultra low profile pcie extensions (basically mining risers that use a USB3 cable) that allow me to relocate the cards elsewhere but still install them in slots that are under big GPU fans.


Those extensions sound interesting, can you share a link or model number?


As mentioned, there is a wide variety of these available of different types. They were made popular and available enough by the mining boom that there are different styles (and different qualities).

This was the one I used: https://www.amazon.com/dp/B07N38Y799


There are M.2 extension cables, I have one that lets me put the M.2 in to the drive bay away from the motherboard. There's also M.2 bifurcation cables. And you can also put the M.2 on PCIe cards, with or without a PLX switch.


The one I use, MSI X-570-A PRO [0], has space for a 2-slot GPU (are they 3 slot nowadays? I don’t know, I don’t do AI or shooter-games), and then 3 x1 and one x16 slot which seems pretty okay.

[0]: https://www.msi.com/Motherboard/X570-A-PRO/


> are they 3 slot nowadays?

In fact, some even clock in at 4.3 slots. https://www.asus.com/Motherboards-Components/Graphics-Cards/...


That card looks like it takes up 2 slots externally, but then has really tall fans on it.


The 2 or 3 top end Nvidia and AMD GPUs have been 3-slot for at least a couple years. I bought a Dell 6800xt on eBay that was notable for only taking up 2 slots.


Crazy. I only recently updated from an RX470 to an RX6600


Most consumers only have 1-2 NVMe drives and a GPU. So I presume OEMs don't want to have an M.2 slot buried under the GPU.

On AM4 ATX boards slot 0 is often the primary M.2 slot. Slot 1 is the GPU and slot 2 is either empty, or the CMOS battery, or, like you suggested, a secondary M.2.

So you're already down to a maximum of 5 PCIe slots.


They're doing it because it allows them to sell 'workstation' motherboards that do have decent slot spacing at ridiculous prices.

I had to resort to a mining-rig-style setup with 16-lane extenders to be able to properly utilize all the slots after running into the same issue.


I was bewildered at this because I couldn't imagine anybody who would find five NVMe SSDs and two PCIe cards more useful than the inverse.

Never tried running a 4x 2TB striped NVMe PCIe 4.0 scratch disk? This stuff just lets you forget about what used to be a bottleneck. No need for RAM disks.
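For a rough sense of what that buys you, here's a back-of-envelope sketch (assuming roughly 7 GB/s sequential per PCIe 4.0 x4 drive; real numbers depend on the drives and the striping layer):

    # Back-of-envelope only: the per-drive figure is an assumption, not a benchmark.
    drives = 4
    lanes_per_drive = 4
    per_drive_gbs = 7.0  # GB/s, typical sequential rating for a fast PCIe 4.0 x4 SSD

    print(f"Aggregate stripe bandwidth: ~{drives * per_drive_gbs:.0f} GB/s sequential")
    print(f"PCIe lanes consumed by the stripe: {drives * lanes_per_drive}")

That second number (16 lanes) is also why this kind of setup runs straight into the lane-count problem discussed elsewhere in the thread.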


Unfortunately most mainstream boards have one PEG / PCIe x16 slot, one M.2 slot, and maybe 1-3 PCIe x1 slots (usually x1 electrically, even if they are x4/x16 mechanically). Those ancillary PCIe slots are generally multiplexed through the chipset and 1-2 generations behind the CPU's PCIe generation, so they aren't even good enough for networking or another SSD (not much faster than SATA, so what's the point).
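For a rough sense of that parenthetical, here are approximate per-direction link ceilings (back-of-envelope, ignoring protocol and filesystem overhead):

    # Approximate per-direction throughput ceilings, MB/s.
    sata3 = 600          # SATA III
    pcie2_x1 = 500       # PCIe 2.0, one lane (8b/10b encoding)
    pcie3_x1 = 985       # PCIe 3.0, one lane (128b/130b encoding)
    pcie4_x4 = 4 * 1969  # PCIe 4.0 x4, i.e. a typical CPU-attached M.2 slot

    for name, mbs in [("SATA III", sata3), ("PCIe 2.0 x1", pcie2_x1),
                      ("PCIe 3.0 x1", pcie3_x1), ("PCIe 4.0 x4", pcie4_x4)]:
        print(f"{name}: ~{mbs} MB/s")

So an older-generation x1 chipset slot really does sit in the same ballpark as a SATA port.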

Even USB-C remains a niche feature on the latest socket 1700 (Intel) and AM4 (AMD, presumably extending to AM5) boards, despite being standard on laptops for years. Part of this is because graphics often aren't on the motherboard, so the motherboard has to include a DP-in socket to support graphics out over USB-C.


I do miss expandability and features. Nothing quite matches the look of nerd envy I got when I popped a folding bluetooth mouse out of my laptop's ExpressCard slot.


> ...a folding bluetooth mouse out of my laptop's ExpressCard slot...

Say what!?! How have I never heard of this until today!?! @causi, would you mind sharing a link to whatever cool mouse you had? Thanks!


Sure. The original HP RJ316AA PCMCIA version was probably more comfortable, but I had the MoGo X54 ExpressCard model which was still pretty good. Both of them charged via the card slot. I wish laptops still came with an ExpressCard slot just for that mouse; I hated giving it up. Plenty of pictures and data on them if you Google the names.


Thanks for sharing!


With the way things are these days I'd settle for a mouse of similar flat dimensions and folding design that adhered magnetically to the lid of my laptop.


Here’s the PCMCIA version Causi mentioned; I had one of these back in the day: https://the-gadgeteer.com/2006/07/27/newton_peripherals_mogo...


The PC I just built, a Ryzen 5950X on an X570S chipset, has the fewest PCIe slots I've ever had on a PC: three. Plus, if you use the second slot, it cuts your graphics card down to x8 PCIe lanes, so no way! And if you use the third slot, it disables one of the onboard NVMe drive slots; there I had no choice. So, in the end, I had room for the graphics card and a single 10 gigabit Ethernet adapter.


Cutting down lanes sucks, but your GPU should still work fine at x8, with only a minuscule difference (or none at all).


Oh, the days of buying PCIe expansion chassis to fit all the cards your system required. When 3 slots were not enough, boom, now you have 7 in just twice the footprint and with twice the power needs.


serdes .... it's serdes all the way down ....

There's a growing trend in SoCs of just having serdes blocks, because these days PCIe, high-speed Ethernet, USB/thunderthing, SATA, etc. are all just variants of the same high-speed signaling technology -- different protocols are just different MACs on top of a common serdes block.


Interesting -- can anyone point out some SoCs with multiple serdes blocks? As an interested, non-embedded developer, I know high-end FPGAs have them, but I'm not aware of any SoCs.

Also, how hard is it to lay out a PCB for these? Is length matching sufficient?


Length matching with a ground plane on the layer below will get you to USB 3.0 or PCIe 2.0, maybe. Higher speeds may need special PCB surfaces? Vias embedded in BGA lands? Not sure. You'd probably also want to simulate the EMF for all the traces on the board, and software for that is "email us for a quote" expensive.


Oh yeah, at these speeds everything is a microwave stripline.


One point regarding slots on a motherboard, though, is space. There is only so much 2D space on a board; when M.2 slots were added, the PCIe slots immediately took a hit.

For me, I like the M.2 slot on the back of the board. I'd like it more if it were a standard, so case makers could guarantee access.

Also, I understand it might be a bit fragile, but is there any reason why M.2 slots can't stick straight up out of the motherboard? Maybe with some support around whatever you plug in. Sticking straight up would provide all-around cooling for the stick AND save 2D motherboard space.


This is on the heels of "what happened to the discrete sound card?"

Let me ask you this: What is left to plug in? Most removable gadgets are USB because USB is fast enough. The devices which actually need the bandwidth, latency or memory mapping are already on the motherboard. And the people who need more are a minority.

The only real use case for PCIe for the average PC user is connecting a GPU or NVMe. Moving forward, I see the converged hodgepodge of USB4 and beyond killing the x1-x4 PCIe slot, with servers being the last holdout for high slot counts.


We're being nickel and dimed and upsold on high-end workstations for something that another poster in this thread already established was cheap... right up until Broadcom bought the company that made the cheap alternative chip.

It's so fucking American, isn't it? If you can't beat someone with your technical superiority, buy them out.


Happens all the time! I was in a startup bought out to kill our product.


This is interesting, but honestly a little pointless, and misses the real issue.

I'll address it line-by-line.

> Lack of USB ports

My motherboard has 8 USB ports of varying capability on the rear I/O block. It has two internal headers to add another 8 USB ports. That's 16 ports. It's not even a very expensive or feature-rich motherboard... it's an MSI X570 Gaming Edge Wifi... $209.99 on release date at MicroCenter in 2019.

> Thunderbolt ports so you can attach monitors, USB devices, and even externally-mounted PCIe cards with a single daisy-chainable cable

Who exactly is doing this, when most video cards have 2 or 3 DisplayPort connectors on them...? External PCIe cards? For what purpose?

> WiFi cards so you can get higher network speeds and lower latency without a cable

Present on most motherboards, even budget options around $125. Nowadays, you can even switch the card out by unscrewing the shield, unscrewing the wire tension screws, and popping in a new M.2 WiFi card. Why suck up a PCIe slot when a small M.2 WiFi card slot fits?

> Network cards so you can have additional and/or faster Ethernet ports

Already built into every modern motherboard. Many have two. Higher-end boards have 10gig NICs. Some have two. Why would you need more than two NICs in your machine?* (See below)

> TV tuners so you can receive, watch, and time-shift local over-the-air content

This may be the single place where the author has a point, since the only other way to view OTA content is as a stream: pay for Hulu, DirecTV Stream, Fubo, or YouTube TV and then input your zip code.

> Video capture cards so you can stream or record video feeds from game consoles or non-USB cameras

There are plenty of excellent USB devices to accomplish this, you don't need a PCI Express-based solution.

> SATA or SAS adapters so you can attach additional hard drives and SATA SSDs

Almost every motherboard, certainly every ATX motherboard of the past 10 years or so I've seen, comes standard with at least 6 SATA ports. Smaller ITX boards may only have 2 or 4, but if that's your design, you've already committed yourself to a lack of PCI Express slots.

> M.2 adapters so you can attach additional NVME SSDs

Most mid-range motherboards have at least two of these. High-end motherboards have up to 4. This still doesn't solve your primary problem that I'm going to address here in a moment.

> Sound cards so you can run a home theater surround sound system from your PC without needing a receiver (RIP Windows Media Center)

Realtek cornered the market here with onboard audio, and it is actually surprisingly good for their higher-end options. Yes, you could get an older HT|Omega card, or a SoundBlaster AE-5/7/9, but why?

> Legacy adapters for older devices that use serial, parallel, or PCI (non-Express) to connect to the computer

I'm not even sure what you'd attach that's that old... and more importantly, why...

**** THE ACTUAL PROBLEM ****

Everything the author of this blog pointed out is an issue, but most, if not nearly all, of them have been solved by motherboards integrating more and more features. This is why I don't get salty about buying a new $400 motherboard from ASUS, or MSI, or Gigabyte, or whoever. I get high-end onboard sound (usually a Realtek ALC4080, which supports 7.1-channel surround plus optical S/PDIF), a 2.5 to 10 gig NIC, WiFi 6 or 6E, 8-12 USB ports including Type-A and Type-C, and multiple M.2 slots (some motherboards support up to 5 now).

Turn the clock back 20 years or so. Hell, go back to 1997, when the first WiFi router was sold.

You'd have to buy your motherboard. You'd have to buy your NIC. You'd have to buy your WiFi card. You'd have to buy your sound card. That's four separate components replaced by one board. But wait... most motherboards had TWO... at best FOUR... USB ports. So you needed to buy at least one or two USB hubs to get up to the 8 to 12 ports modern motherboards have on their rear I/O connectors alone.

No.

The problem is not "lack of PCI Express slots".

The problem is lack of PCI Express lanes. How do you expect to drive all this awesome shit you wanna throw into your tower case? The MSI Z690 MEG UNIFY has two PCI Express 5.0 x16 slots and one PCI Express 5.0 x4 slot. And guess what... your single x4 slot is off-limits, because you're gonna buy a Samsung 990 Pro NVMe SSD (which is gonna use those 4 lanes), and will certainly saturate it with its 1,600,000 IOPS and 8-9,000 MB/s transfer rate.

SLI and CrossFire are still supported technologies, even though, frankly, no one could afford to use them in the past two years, but that's changing. The cratering of cryptocurrencies and the uncontrolled dive that GPUs are currently experiencing means dual video card solutions may be back on the table for gamers. Hell, I've seen used RTX 3080s sell for $440 on eBay by the time of auction expiry. Sub-$1000 for dual RTX 3080 still outperforms a single RTX 3090 Ti. But even those are dropping like a shit from heaven. I saw one for sale on eBay for $1059. $2120 for 100+ FPS at 4K, with ultra settings, and ray-tracing activated? Why not? Lots of people were buying single RTX 3090s for $2400 just a year ago.

No, the problem is lack of PCI Express lanes available.

We thought AMD had ridden to our rescue with Threadripper. 64 to 128 PCI Express lanes "ought to be enough for just about anybody." Enthusiasts bought Threadripper to easily run dual-GPU setups and dual or quad NVMe setups.

You didn't even have to go nuts. You could pick up a Threadripper 2950X, with its 16 cores, for as little as $799 on release, and it gave you 64 lanes. The 3960X supported 88 lanes. Hell, even with dual GPUs and quad NVMe drives, you still had room for another 40 lanes of equipment.
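To make the lane arithmetic concrete, here's a small sketch (a hypothetical build; the x16-per-GPU and x4-per-NVMe figures are the usual allocations, not any specific board's wiring, and the 64/20 platform totals are round numbers):

    # Hypothetical lane budget; swap in your own parts list.
    build = {
        "2x GPU at x16": 2 * 16,
        "4x NVMe at x4": 4 * 4,
        "10G NIC at x4": 4,
    }
    used = sum(build.values())
    platforms = {"Threadripper 2950X (64 lanes)": 64, "Mainstream desktop (20 lanes)": 20}

    for name, lanes in platforms.items():
        spare = lanes - used
        if spare >= 0:
            print(f"{name}: {used} lanes used, {spare} to spare")
        else:
            print(f"{name}: over budget by {-spare} lanes")

The same build that leaves a Threadripper with lanes to spare blows straight past a 20-lane mainstream CPU.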

No. The problem is not the lack of slots. The problem is the lack of lanes.

We thought Lisa Su had ridden to our rescue against Intel and their stingy horseshit antics, only to find out she's not only just as bad, she's arguably worse... because after being the also-ran, far-in-the-distance, second-tier shitpick of a brand, AMD turned their back on the gamers and enthusiasts who brought the company back from the brink. Threadripper is now priced so far out of the enthusiast's reach as to be a non-starter.

But that is your real problem. The lack of PCI Express lanes. Not slots.


>Almost every motherboard, certainly every ATX motherboard of the past 10 years or so I've seen, comes standard with at least 6 SATA ports.

Except that the SATA ports on any given motherboard are not all equal. And you didn't even address the SAS point, or high-speed U.2 drives for broadcast or data capture or...

> There are plenty of excellent USB devices to accomplish this, you don't need a PCI Express-based solution.

Evidently you don't work in broadcast or computer vision or machine vision inspection or...

> Realtek cornered the market here with onboard audio, and

Evidently you don't work in broadcast, or use a DAW, or desire higher grade audio, or special audio processors or low-latency MIDI...

>> Legacy adapters for older devices that use serial, parallel, or PCI (non-Express) to connect to the computer

> I'm not even sure what you'd attach that's that old... and more importantly, why...

Evidently you don't work in robotics or machine vision inspection or industrial control or industrial logging or medical devices or...

Before we go any further: USB is very much consumer-grade tech. We know this because it can be unplugged or work itself loose under vibration. There are locking connector options, but they are for the most part completely inadequate. USB is also prone to incredible cross-talk in noisy environments.

I've touched all of these areas in my career, and they all require those connections you are so quick to dismiss out-of-hand. This is an incredibly parochial and naive mindset on exhibit here.


"I've touched all of these areas in my career, and they all require those connections you are so quick to dismiss out-of-hand."

I hear you, and I'm usually first to remind people that their own use case is anecdotal and irrelevant.

However, are you really suggesting that the use cases you mentioned make up a large enough percentage of the whole to warrant manufacturers catering to them?


Yeah, we're talking multiple multi-billion-dollar industries. If the main players won't service them, they'll be serviced by niche players who do. You won't find a USB version of an SDI capture card worth a damn (they exist, they're just universally not good) or used in a professional broadcast environment, because USB, de facto, doesn't lock and is temperamental. You will struggle to find a USB multi-HDMI capture card. Forget about putting multiple USB capture devices on the single shared USB connection integrated into your motherboard; a motherboard with 12+ ports usually has three distinct USB controllers, only one of which is worth a damn.

There are PCIe machine vision capture cards that have onboard GPUs and dedicated co-processors, so the machine vision algorithms can run directly on the PCIe card and never involve the CPU nor have to move the captured video across the bus to main (usually far slower) DRAM. USB has incredibly high latency, and more importantly, non-deterministic latency, which is why USB MIDI on the desktop is fine for casual use and lousy in an event setting or a professional recording studio.


> USB has incredibly high latency, and more importantly, non-deterministic latency

Seeing that RME's USB 2 interfaces manage to stream 50 or more 24-bit audio channels at 48 kHz with buffer sizes small enough to get latencies in the millisecond range, I always wonder: are other manufacturers just doing it wrong? I know that doesn't completely cover the non-determinism argument, but "incredibly high" seems to be covered pretty well.


The RME devices do indeed have low latency, at the limits of the USB 2.0 spec: 125 microseconds, I believe. They crank up the USB poll rate. They're also using the Arasan chipset, IIRC, the same one found in some other prosumer and pro equipment, e.g. the Solid State Logic hardware. I am hazy on the details, it has been a few years since I was inside any of those devices; people from RME and SSL, please feel free to correct me as to your chipsets. Some are using dedicated FPGAs to handle the data capture and processing before handing off to USB. RME's devices are definitely doing a bunch of onboard processing before giving it to the USB bus, and making sure the packets going out are as small as they can be. Most non-integrated USB controllers use VIA or Renesas. USB 2.0 has lower latency but less consistency (shared bus) vs. USB 3.0, which has higher latency but is more consistent (point-to-point protocol). Obviously you don't want to share the USB 2.0 port on your PC between an RME and a bunch of USB 3.0 devices, e.g. an external drive, because then you just end up with the worst of both specs: terrible latency and terrible consistency.
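Some illustrative arithmetic on why the audio buffer, not the 125 µs USB 2.0 polling interval, tends to dominate the quoted latency (rough numbers only, not any vendor's actual driver internals):

    # Rough illustration; real round-trip latency adds converter and driver overhead.
    sample_rate = 48_000        # Hz
    buffer_samples = 48         # a small but realistic interface buffer
    usb2_microframe_ms = 0.125  # USB 2.0 high-speed polls every 125 microseconds

    buffer_ms = buffer_samples / sample_rate * 1000
    print(f"One {buffer_samples}-sample buffer at {sample_rate} Hz: {buffer_ms:.2f} ms")
    print(f"USB 2.0 microframe interval: {usb2_microframe_ms} ms")

So a 1 ms buffer is roughly eight microframes deep; the bus polling interval only becomes the problem when its timing is inconsistent.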


If you went into Best Buy and asked 100 people, do you think that's what they'd want?

Your use case is very much served by every major computer systems manufacturer; it's just listed under the "Professional"/"Workstation" category, not the "consumer/prosumer/a computer is a computer" one.

If you're saying you don't want to spend the money, I get that, but that's different from assuming your use case is the majority.

SAS is not consumer. Multi-HDMI capture is not consumer. But all these solutions are available in a different market, and they're also much higher quality.

EDIT: You also mention U.2 drives. If you're expecting to use those in a consumer PC, you might want to check your expectations. Maybe 15 years from now they will be the standard, but they're firmly in the enterprise market.


You kinda wasted your time there, since I work in one of those very industries and know most (if not all) of this. I just asked whether you really thought it would be worth the major players' time. The use of the word 'niche' was telling.


"Niche" is relative. $500B market vs $5B market. $5B is still $5B. A couple of those and it can start to add up to real money.


Pity I can't downvote you. You deserve it.

You didn't read the whole post, especially the most critical part.

Lack of PCI Express LANES is the problem, not slots.

All this great shit you're talking about is wonderful... as long as you have the PCI Express lanes to support all those add-in cards.


I did read your entire post. And the OP post.

I have Intel Xeon Skylake processors: 48 lanes of PCIe 3.0 each, all of equal priority, so they aren't tiered like in some of the earlier blue team processors or the red team processors. Across two CPUs that gives me 96 lanes. The slots are switched so either CPU can use an individual PCIe card, or I can assign a PCIe card to a specific CPU. Alternatively, if there are not enough PCIe slots, I can run PCIe cards through a separate switch. There are a lot of non-consumer-grade motherboards out there that offer these features. It is rare that any deployment will require all 48 lanes on a CPU to be maxed out simultaneously, though I've been involved with use cases where that has taken place (8x 4K uncompressed video stream captures directly to storage); what happens then is you run into DRAM bandwidth issues and other problems.

In my workstation I have dual RTX A5000 GPUs, dual Blackmagic quad 4K capture cards, dual Highpoint USB 3.0 expanders with four separate USB controllers per board, a Highpoint M.2 RAID controller with on-board PLX, along with the onboard M.2, six channel U.2 through a switch, onboard USB & SATA.

What we should be asking for is better switching, not more lanes. Again, it is a rare use case where we can max out the bandwidth of all PCIe lanes of a CPU. We should also be asking for better switching for the direct DMA between cards which is a sorely neglected area across all architectures/motherboards.

P.S. Forgot to mention the NIC PCIe card.


> Who exactly is doing this, when most video cards have 2 or 3 DisplayPort connectors on them...? External PCIe cards? For what purpose?

This is used extensively in laptop docks. If my desktop motherboard supported it, I'd hook all my peripherals up this way. Instead, I have a KVM with the desktop on one side and a laptop dock (for my work and personal laptops) on the second KVM input. A single Thunderbolt cable goes from a CalDigit dock to the laptop.

With Thunderbolt, I could theoretically even share my GPU between machines if I had an enclosure (and the correct ports on the desktop).


Did you read the whole article? I covered the slots vs. lanes issue in the third paragraph, and debunked my own list of expansion card types in the second half of the article.


> Why would you need more than two NICs in your machine?* (See below)

I've always wanted to build an 8-port or 16-port software Linux-based switch. Just for fun.
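On Linux the basic version is mostly just bridging the ports together. A minimal sketch, assuming the NIC ports show up as eth1-eth4 (placeholder names), run as root, and using the plain kernel bridge rather than anything fancy like DPDK or a VLAN-aware setup:

    #!/usr/bin/env python3
    # Minimal "software switch": enslave several NICs to one Linux bridge.
    import subprocess

    BRIDGE = "br0"
    PORTS = ["eth1", "eth2", "eth3", "eth4"]  # placeholder interface names

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    ip("link", "add", "name", BRIDGE, "type", "bridge")
    for port in PORTS:
        ip("link", "set", port, "master", BRIDGE)  # attach the port to the bridge
        ip("link", "set", port, "up")
    ip("link", "set", BRIDGE, "up")

The kernel then forwards frames between the ports like a dumb switch; anything smarter (STP, VLAN filtering) is extra configuration on top.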


I can see a use case for all of them. Just not all in the same case. Connecting a capture card and a legacy port? Maybe, but USB might suffice too.


For audio you often get a ton of electrical noise on a PCIe card that isn't present with an external device. Even the few PCIe interfaces still around use breakout boxes.


The real awakening for audiophiles was when someone benchmarked a lot of devices and found the $10 Apple USB-C to 3.5mm adapter outperforming these expensive boxes and PCI cards.


If you step up to an RME you’ll know the difference.


"The Death of the Consumer PCIe Expansion Card". just a friendly reminder that in the data center world pcie isn't going anywhere soon


Shout-out to the 2280 M.2 slot: fantastic functionality for smaller-footprint items like carrier boards for NVIDIA Jetson modules. For those who don't know, check these out: https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...


I personally find this sad because multi-GPU is an excellent way to optimize compute per watt for enthusiast scientific computing (Folding@home, etc.).

Back when I was still contributing compute to SETI@home, I analyzed the results database and concluded that, by and large, GPUs did substantially more work per watt-hour and that the mid-range parts were the ideal price point (GTX 1070 at the time); my best rig had 4x GTX 1070s in it, which is an excellent balance in many respects. It's a dinosaur now, and when it finally croaks I don't know if I'll try to replace it.


"The Death of the PCIe Expansion Card" seems like an extreme description of the latest set of motherboards having "fewer than five" PCIe slots.

Yeah, the numbers have shrunk over time, but I think that's probably due to USB, and perhaps to some extent HDMI, making expansion cards unnecessary. Also, motherboards now have enough drive controllers built in that most people who would add a hard drive to their PC have no reason to need an expansion card to do so.

I don't particularly like the trend either, but I think PCIe is far from dead at this point.


It's not just a description of the latest set of motherboards having fewer than five PCIe slots. It's a description of the PC expansion market as a whole, including the fact that many things that would've been PCIe expansion cards "back in the day" are now built into motherboards, sold as USB/Thunderbolt devices, and/or available as M.2 cards. The journey of discovery it took for me to realize this was simply kicked off by the dearth of PCIe slots on announced AM5 motherboards.


Yeah, I guess what I was trying to say is, "PCIe isn't dead, it just smells funny."

:-D


I haven't used more than two PCIe slots for a long time now. My current ASUS mobo has... 4, I think, and I use two: one for the EVGA GPU and another for a USB 3.0 card I had lying around.


Since every board started coming with built-in WiFi, it's been a while since I used more than 1. I'm not sure what all the panic over PCIe is about when all the stuff we used to use these cards for is just built in now.


I really like the "Ship of Theseus" take. My own desktop PC still has that one slot cover from my first PC from 1990s. This is a deliberate nod to all the history of upgrades and migrations that was so much fun to do.

By the way, my own modus operandi was to choose mid-level hardware for upgrades, exactly to keep the budget low by keeping as many older parts as possible.


This is just motherboard manufacturers segmenting their market and overcharging the high end so they can reduce costs on the low end. If you need four PCIe x16 slots you really NEED them, and you'll pay $300, $500, maybe $1000 per board to get them.


This article title is BS. PCIe is alive and well. We just need it for fewer things, because a lot of the good stuff is already on board, and it now appears in more form factors, namely Thunderbolt and M.2.

We will still need it for graphics, fast SSDs, Fibre Channel, 10G Ethernet, etc.


The title is accurate, though. The use of PCIe is exploding everywhere, but we're putting it in every other interface and the "PCIe expansion card" connector and form factor is seeing relatively less use overall.


Because everything is standardizing. You don't need ports for TV tuners or proprietary camera memory cards anymore. Everything you do need is now built into the mobo.


It's not 'dead' at all, that's what I was referring to. It's not going anywhere either for the remaining usecases.


On the flip side, this does open up some up-cycling opportunities as the prior generation gets retired.

Take an old gaming PC, throw out the GPU sitting in the x16 Gen 4 slot, and replace it with a bifurcation card for NVMe drives. Should make for a decent Proxmox machine with ZFS storage.
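For the storage side, something like this is one way to pool the drives (a sketch only: the device names are placeholders for the four bifurcated NVMe drives, it must run as root, it wipes whatever is on them, and two mirrored pairs striped together is just one reasonable layout):

    # Hypothetical: four NVMe drives from a x4/x4/x4/x4 bifurcation card,
    # arranged as two mirrored pairs striped together (RAID 10-ish).
    import subprocess

    drives = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]
    subprocess.run(
        ["zpool", "create", "tank",
         "mirror", drives[0], drives[1],
         "mirror", drives[2], drives[3]],
        check=True,
    )

A raidz layout over the same four drives trades some write performance for more usable capacity; either way Proxmox can then use the pool directly.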


Still popular in some areas of the pro audio and video industries, e.g. Avid HDX and UAD DSP cards can eat up a few slots; combine that with an SSD and a GPU and suddenly your 2019 Mac Pro is full up.


I remember back in the original Nvidia GTX Titan era (2013-ish?) it wasn't uncommon for top-of-the-line gaming rigs to have 2, 3, even 4 GPUs. At some point that just ceased to be a thing.


The first batch of AM5 mobos are high-added-value; makers want to make the most money by selling buzzwords. Normal consumer-grade mobos (with many PCIe slots) will probably follow later.


Oh this can only go good places long-term and never end badly.

** mainly Intel!


And here I am, wanting a modern computer with regular PCI slots to use old hardware.


You can at least get PCIe to PCI adapters. https://www.startech.com/en-us/cards-adapters/pex1pci1



