I'm running the kind of eclectic setup that expansion slots were meant for: a board with three full-length PCIe slots, each holding a video card.

Running a multiseat machine (i.e. truly independent login stations) pretty much requires a video chip per seat.

It's been an interesting ride. Graphics card fans weren't really meant for 24x7 operations. After a recent power failure, gummed-up lubricant kept one fan from spinning back up, and the card died of overheating. Multiseat login was also abandoned in GDM, i.e. it broke and to my knowledge was never fixed, so I have to use LightDM, with no alternatives. There were also stability issues with Nvidia cards, but three Radeon cards work fine.
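
For reference, the seat-to-GPU wiring itself is handled by systemd-logind these days, and LightDM just follows it. A minimal sketch of the idea in Python, with placeholder sysfs paths rather than my actual ones (find your real paths with loginctl seat-status and udevadm info):

    #!/usr/bin/env python3
    # Sketch: pin extra GPUs to logind seats. The sysfs paths below
    # are placeholders, not real hardware addresses.
    import subprocess

    SEAT_DEVICES = {
        "seat1": "/sys/devices/pci0000:00/0000:00:02.0/drm/card1",
        "seat2": "/sys/devices/pci0000:00/0000:00:03.0/drm/card2",
    }

    for seat, device in SEAT_DEVICES.items():
        # "loginctl attach" writes a persistent udev rule tagging the
        # device (and its child devices) to the given seat.
        subprocess.run(["loginctl", "attach", seat, device], check=True)

You'd attach each seat's USB hub and audio device the same way, then verify with loginctl list-seats.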

Possibly with the latest hardware a GPU-per-seat setup could be done over Thunderbolt? Meanwhile we soldier on on the cheap, with a circa-2012 ultra-high-end gamer machine still providing adequate compute power for all the screens in the house.




It is unfortunate that consumer-grade GPUs cannot be shared in Proxmox/ESXi/UNRAID the way the "pro" level cards can. One of the four major benefits of going with RTX A5000 cards over 3090s was that I can share one or more GPUs amongst several virtual machines, i.e. Shared Passthrough and Multi-vGPU.
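
For the curious: the sharing rides on the kernel's mediated-device (mdev) interface once NVIDIA's vGPU host driver is loaded. A rough sketch of what carving out one vGPU looks like from the host; the PCI address and profile name here are hypothetical:

    #!/usr/bin/env python3
    # Sketch: create one vGPU instance via the standard mdev sysfs
    # interface. PCI address and profile name are hypothetical.
    import pathlib, uuid

    gpu = pathlib.Path("/sys/bus/pci/devices/0000:41:00.0")

    # Each vGPU profile (framebuffer split, head count, ...) appears
    # as a directory under mdev_supported_types.
    for t in (gpu / "mdev_supported_types").iterdir():
        name = (t / "name").read_text().strip()
        avail = (t / "available_instances").read_text().strip()
        print(t.name, name, avail, "instances left")

    # Writing a fresh UUID to "create" instantiates the vGPU, which
    # is then handed to the VM as a vfio-mdev device.
    (gpu / "mdev_supported_types" / "nvidia-472" / "create").write_text(
        str(uuid.uuid4()))

Proxmox wraps the same mechanism behind the mdev option on its hostpci config.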


https://github.com/Arc-Compute/LibVF.IO/tree/master/ plus https://github.com/gnif/LookingGlass works pretty well. If you use an Intel GPU, particularly one of their new Arc dedicated GPUs, it supports the functionality on consumer-grade hardware without any trickery, and you just need Looking Glass to map the outputs.

If you really want multiple GPUs, though, you can also use a normal 1-in-4-out PCIe switch and save a lot on the Thunderbolt components in between. Low-bandwidth ones are particularly dirt cheap thanks to crypto-mining volume. Keep an eye out for ACS support, or you may have to use an ACS override patch.
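
The quick way to see whether a switch actually isolates its ports is to dump the IOMMU groups; everything in one group has to be passed through together. This is standard sysfs, nothing vendor-specific:

    #!/usr/bin/env python3
    # List IOMMU groups; a switch without ACS lumps all downstream
    # devices into a single group.
    import pathlib

    groups = pathlib.Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        for dev in sorted((group / "devices").iterdir()):
            print("group", group.name + ":", dev.name)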


> If you use an Intel GPU, particularly one of their new Arc dedicated GPUs, it supports the functionality on the consumer grade hardware without any trickery and you just need Looking Glass to map the outputs.

Does this work yet? Last time I looked, my understanding was that SR-IOV is supposedly supported on Arc but the software doesn't exist yet, and might not for some time.


I don't think it's due upstream until next year. You can pull it and mess with it now if you're willing to buy the GPU from eBay or China; I haven't seen any US/Europe retail listings yet. For most people it's fair to say it's not actually available.

I'm sad they got rid of GVT-g with the change, though. SR-IOV is definitely a nice addition, but it has downsides for resource sharing. Undoubtedly GVT-g was just considered too unsafe and too niche to keep.
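
If the out-of-tree branch follows the standard PCI SR-IOV plumbing (which I'd assume, but haven't verified), enabling the virtual functions is just the usual sysfs knob; the PCI address below is a placeholder:

    #!/usr/bin/env python3
    # Sketch: enable SR-IOV VFs, assuming the driver exposes the
    # standard PCI sriov_* attributes. Address is a placeholder.
    import pathlib

    dev = pathlib.Path("/sys/bus/pci/devices/0000:03:00.0")

    total = int((dev / "sriov_totalvfs").read_text())
    print("driver reports up to", total, "VFs")

    # Writing N creates N virtual functions; each can then be bound
    # to vfio-pci and handed to a guest.
    (dev / "sriov_numvfs").write_text(str(min(total, 4)))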


Interesting stuff, but why share the hardware between multiple virtual machines instead of regular Linux users? Each video output port should have its own X or Wayland session.


Typically this is done for Windows guests.


> Graphics card fans weren't really meant for 24x7 operations.

Are those seats highly active all the time? If not, plenty of GPUs have fans that shut off when idle.



