This is just a nice-to-have, but I would like to use the third GPU’s CUDA cores when I don’t need GPU passthrough.
With just one NVIDIA GPU using the nvidia driver and the other bound to vfio-pci, all is good; I can access the CUDA cores as required. When both cards use the nvidia driver it all heads off into la-la-land: I can boot to multi-user.target and everything is fine (driver loads etc.), just not graphical.target… Note: the nvidia driver is installed with --no-opengl-files and CUDA with --no-opengl-libs.
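For anyone following along, the usual way to keep one card on vfio-pci at boot is a modprobe config keyed to that card’s vendor:device IDs. This is only an illustrative sketch; the IDs below are placeholders for a GPU and its HDMI audio function, so substitute the actual pair reported by `lspci -nn` for your passthrough card:

```text
# /etc/modprobe.d/vfio.conf -- illustrative example only;
# replace the ids= values with your card's vendor:device pair from `lspci -nn`
options vfio-pci ids=10de:1c82,10de:0fb9
# ensure vfio-pci claims the device before the nvidia driver can
softdep nvidia pre: vfio-pci
```

The other NVIDIA card, not listed in `ids=`, is left for the nvidia driver and CUDA.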
The Intel GPU is driving the primary displays via DisplayPort and DVI-D. NVIDIA GPU 1 is in a PCIe x16 slot; NVIDIA GPU 3 is in a PCIe x1 slot.
Solved… it was the power to the x1 slot. Moved the card over to the x4 slot and all is good now; I can switch one card to vfio-pci when required for GPU passthrough and use both GPUs for CUDA when needed…
I seem to remember, many years ago in these forums, a thread about multiple NVIDIA GPUs.
Of course I don’t know if any of that is still relevant today, but I recall that with multiple NVIDIA GPUs there was a special communications channel between the cards. Without remembering the specifics, I’d guess it was a hardware/wired connection bypassing the motherboard.
Probably SLI? I’m not using the graphics output on the host (just passthrough), since I didn’t install the OpenGL files with the nvidia driver. I guess if I did, I could set the multi-GPU option in the Xorg device config file…
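If the OpenGL files were installed and both cards were meant to drive X, the Xorg side would just be one Device section per GPU, pinned by BusID. A sketch only; the file name and BusID values below are placeholders, so use the bus addresses `lspci` reports for your cards:

```text
# /etc/X11/xorg.conf.d/10-nvidia.conf -- illustrative sketch;
# BusID values are placeholders, take the real addresses from `lspci`
Section "Device"
    Identifier "nvidia0"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection

Section "Device"
    Identifier "nvidia1"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"
EndSection
```

Pinning by BusID avoids the driver guessing which card to use when more than one NVIDIA GPU is present.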
If I had more CPU cores I could run multiple systems… I’m contemplating getting an RX 570 to see how that goes, or else a GTX 1650; both are around the same price…