
One thing I want to figure out (because I don't have a dedicated Windows gaming desktop), and the documentation on the internet seems sparse: my understanding is that if I want to use PCIe passthrough with a Windows VM, these GPUs cannot be available to the host machine at all. Or, technically they can, but I need some scripting to make sure the NVIDIA driver doesn't own them before starting the Windows VM, and to re-enable it after shutdown?

If I go with a vGPU solution, I don't need to turn the NVIDIA driver on and off for these GPUs when running the Windows VM? (I won't use these GPUs for display on the host machine.)




> One thing I want to figure out (because I don't have a dedicated Windows gaming desktop), and the documentation on the internet seems sparse: my understanding is that if I want to use PCIe passthrough with a Windows VM, these GPUs cannot be available to the host machine at all. Or, technically they can, but I need some scripting to make sure the NVIDIA driver doesn't own them before starting the Windows VM, and to re-enable it after shutdown?

The latter statement is correct. The GPU can be attached to the host, but it has to be detached from the host before the VM starts using it. You may also need to get a dump of the GPU ROM and configure your VM to load it at startup.
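
One common way to get that dump (assuming the card isn't busy driving the host's display) is to read the ROM out of sysfs. A minimal Python sketch, with a placeholder PCI address and output path:

    #!/usr/bin/env python3
    # Hypothetical sketch: dump a GPU's ROM via sysfs. Run as root while the
    # card is idle. The PCI address and output path are placeholders.
    from pathlib import Path

    rom = Path("/sys/bus/pci/devices/0000:01:00.0/rom")  # placeholder address

    rom.write_text("1")  # enable reading the ROM
    Path("/var/lib/libvirt/vbios.rom").write_bytes(rom.read_bytes())
    rom.write_text("0")  # disable it again

The resulting file can then be referenced from the VM definition, e.g. libvirt's <rom file='...'/> element inside the GPU's <hostdev> entry.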

Regarding the script, mine resembles [0]. You need to unload the NVIDIA drivers and then attach the card to VFIO before the VM starts, and do the opposite afterwards. You may also need to image your GPU ROM [1]. A rough sketch of that detach/attach step is below the links.

[0]: https://techblog.jeppson.org/2019/10/primary-vga-passthrough...

[1]: https://clayfreeman.github.io/gpu-passthrough/#imaging-the-g...
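
Purely as an illustration (not the script from [0]): a minimal Python sketch of the detach/attach step, assuming placeholder PCI addresses for the GPU and its HDMI audio function, and that nothing on the host still holds the card:

    #!/usr/bin/env python3
    # Hypothetical sketch: release a GPU from the NVIDIA driver and hand it to
    # vfio-pci before starting the VM. Run as root; the PCI addresses are
    # placeholders (see `lspci -nn`). Reverse the steps after the VM shuts down.
    import subprocess
    from pathlib import Path

    DEVICES = ["0000:01:00.0", "0000:01:00.1"]  # GPU + HDMI audio (placeholders)

    # Unload the NVIDIA kernel modules; this fails if anything still uses the card.
    for mod in ("nvidia_drm", "nvidia_modeset", "nvidia_uvm", "nvidia"):
        subprocess.run(["modprobe", "-r", mod], check=False)

    subprocess.run(["modprobe", "vfio-pci"], check=True)

    for dev in DEVICES:
        sysfs = Path("/sys/bus/pci/devices") / dev
        if (sysfs / "driver").exists():
            (sysfs / "driver" / "unbind").write_text(dev)    # detach current driver
        (sysfs / "driver_override").write_text("vfio-pci")   # prefer vfio-pci
        Path("/sys/bus/pci/drivers_probe").write_text(dev)   # re-probe; vfio-pci binds

Going back is the same in reverse: unbind from vfio-pci, clear driver_override, and modprobe the NVIDIA modules again.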


Exactly. With GPU virtualization, the driver can share the GPU's resources among multiple systems, such as the host operating system and guest virtual machines. Shame on NVIDIA for arbitrarily locking us out of this feature.
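
For a concrete picture of what that sharing looks like on the Linux side (assuming a driver stack that exposes vGPU mediated-device types): the kernel's VFIO mdev interface lists the available vGPU profiles under the card's sysfs node, and writing a UUID to a profile's create file produces a virtual GPU instance that can be assigned to a guest while the physical card stays with the host driver. A rough Python sketch with placeholder names:

    #!/usr/bin/env python3
    # Hypothetical sketch: list the mediated-device (vGPU) profiles a card
    # exposes and create one instance. Requires a vGPU-capable driver stack;
    # run as root. The PCI address and profile name are placeholders.
    import uuid
    from pathlib import Path

    gpu = Path("/sys/bus/pci/devices/0000:01:00.0")  # placeholder PCI address

    for profile in sorted((gpu / "mdev_supported_types").iterdir()):
        name = (profile / "name").read_text().strip()
        avail = (profile / "available_instances").read_text().strip()
        print(f"{profile.name}: {name} ({avail} available)")

    # Create one instance of a chosen profile (directory name is a placeholder).
    chosen = gpu / "mdev_supported_types" / "nvidia-259"
    mdev_uuid = str(uuid.uuid4())
    (chosen / "create").write_text(mdev_uuid)
    print("created mdev:", mdev_uuid)  # shows up under /sys/bus/mdev/devices/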


Got some time to try this now. It worked as expected; I have vgpu_vfio. However, it doesn't perfectly fit my needs. In particular, my host system is "heavy": I need it to run CUDA etc., while the VM just runs games. Unfortunately, the 460.32.04 driver on the host doesn't seem to have full functionality, so I can no longer run CUDA on the host.


Is there info on this sort of usage? I'd love to use the host for NVENC and a VM guest for traditional GPU stuff, but haven't been able to find anything on doing that.



