GPU passthrough - Various virtualization technologies

Although this post starts a new thread, it’s also in response to another thread in the Applications Forum

Some references I consider best as of today.
As I described earlier, hardware pass-through (particularly GPU) in virtualization is very new, and has only started to stabilize around standards like IOMMU and VFIO within the past half year or so (as of this writing). So you're probably looking at this at a good time: it's no longer "The Wild West," when individuals were pushing into unexplored territory, and everyone is now generally working on, and improving, the same things. But it's still early enough that there may be unknown wrinkles to address. So, besides what I'll post here, be sure to join communities supporting the specific procedures you follow, and only follow guides written as recently as possible, preferably within the past 6 months or so.

In any case, this is probably a good time to post something that could be considered a milestone reference for people looking at this for at least the next half year or so.

List of IOMMU supported hardware


From one of the architects of IOMMU
Most comprehensive reference
Debian reference, probably all applicable except installation


As usual, VBox is a bit behind the others, without a clear guide for enabling GPU pass-through. The following link is to the VBox Advanced Configuration documentation, which covers PCI passthrough in general and includes a section on passing through a webcam.

As I noted in my prior post,
If you consider a non-virtualization technology like Docker or LXC, you likely wouldn't have to deal with virtualized devices and pass-through at all, because Linux containers have direct access to the host's hardware devices. I haven't researched this approach, but I suspect you would only need to assign resources using tried-and-true methods that have been around for many years.
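For example, with LXC the host's existing device nodes can simply be exposed to the container. The following is a hypothetical config fragment for an NVIDIA card, not something from this post; the device nodes listed are illustrative, and major number 195 is the one the NVIDIA driver registers for its character devices:

```
# Illustrative LXC container config fragment (assumptions noted above):
# allow the NVIDIA character devices (major 195) inside the container
lxc.cgroup.devices.allow = c 195:* rwm
# bind-mount the host's device nodes into the container
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

Since the container shares the host kernel, the host's NVIDIA driver does the actual work; nothing is virtualized or passed through in the VFIO sense.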

Particularly for this technology, also search this openSUSE Technical Help - Virtualization forum for GPU topics related to your chosen virtualization technology; those threads may cover developments and issues not mentioned in the above references.


I have it all working fine here with an Intel DQ77MK motherboard and Nvidia GPUs. That said, the Nvidia GPUs are not used for driving any video on the host; they are used purely for vfio-pci and CUDA (installed manually with --no-opengl-files).

 /sbin/lspci -nnk | egrep -A3 "VGA|Display|3D"
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller [8086:0152] (rev 09)
    DeviceName:  CPU
    Subsystem: Intel Corporation Device [8086:2035]
    Kernel driver in use: i915
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208B [GeForce GT 710] [10de:128b] (rev a1)
    Subsystem: ZOTAC International (MCO) Ltd. Device [19da:6326]
    Kernel driver in use: nvidia
    Kernel modules: nouveau, nvidia_drm, nvidia
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208B [GeForce GT 710] [10de:128b] (rev a1)
    Subsystem: ZOTAC International (MCO) Ltd. Device [19da:5360]
    Kernel driver in use: vfio-pci
    Kernel modules: nouveau, nvidia_drm, nvidia

The how-to I followed was based around this one:

The following is needed for GPU-Z to work in Windows 10;

cat /etc/modprobe.d/10-kvm.conf

options kvm ignore_msrs=1
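If you'd rather not reboot after adding that option, the same parameter can usually be flipped at runtime through sysfs. A sketch, assuming the kvm module is already loaded and you're running as root:

```shell
# Flip ignore_msrs at runtime (assumes the kvm module is loaded; needs root).
p=/sys/module/kvm/parameters/ignore_msrs
if [ -w "$p" ]; then
    echo 1 > "$p"    # apply without rebooting
    cat "$p"         # confirm the new value
else
    echo "kvm module not loaded, or not running as root"
fi
```

The modprobe.d file is still needed so the setting survives a reboot.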

My vfio configs, including the card aliases that were added;

cat /etc/modprobe.d/11-vfio.conf 

alias pci:v000010DEd0000128Bsv000019DAsd00005360bc03sc00i00 vfio-pci
alias pci:v000010DEd00000E0Fsv000019DAsd00005360bc04sc03i00 vfio-pci
options vfio-pci ids=10de:128b:19da:5360,10de:0e0f:19da:5360
options vfio-pci disable_vga=1

cat /etc/modules-load.d/vfio.conf 


The grub options;

cat /etc/default/grub | grep GRUB_CMDLINE_LINUX_DEFAULT

GRUB_CMDLINE_LINUX_DEFAULT="splash=silent scsi_mod.use_blk_mq=1 intel_iommu=on quiet"
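Once intel_iommu=on is active, it's worth sanity-checking how the kernel grouped your devices; passthrough happens per IOMMU group, so the GPU (and its HDMI audio function) should ideally sit in a group of their own. A minimal sketch that lists each device with its group number (prints nothing if IOMMU isn't enabled or supported):

```shell
#!/bin/bash
# Print each PCI device address together with its IOMMU group number.
# Produces no output if /sys/kernel/iommu_groups is empty (IOMMU off).
shopt -s nullglob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev%/devices/*}   # strip the trailing /devices/<addr>
    group=${group##*/}        # keep only the group number
    addr=${dev##*/}           # PCI address, e.g. 0000:02:00.0
    echo "IOMMU group $group: $addr"
done
```

You can feed any of the printed addresses to lspci -nns to see which device it is.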

YaST lan setup…


ip link show (with vm running)

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
5: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP mode DEFAULT group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

Other notes;

When rebooting the host, ensure the VM screen is disconnected (maybe just my system setup), but it still heads off into la-la-land…

I use Synergy, but still need a keyboard connected for the Windows UAC prompt; I set UAC to 'Notify me only when apps try to make changes… (do not dim my desktop)'.

On Linux guests I had to add serverHostname= to the ~/.config/Synergy/Synergy.conf file to auto-connect!
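For reference, that line looks something like the following; the hostname value here is a placeholder, so substitute your actual Synergy server's name:

```
# ~/.config/Synergy/Synergy.conf (hostname value is a placeholder)
serverHostname=myserver.example.com
```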

My qemu start script;


#Create initial image via qemu-img create -f raw -o preallocation=full os151passthru.raw <disk_sizeG>

qemu-system-x86_64 \
-m 4G \
-cpu host,kvm=off \
-smp 2,sockets=1,cores=2,threads=1 \
-rtc clock=host,base=utc \
-serial none \
-parallel none \
-vga none \
-nographic \
-usb \
-device usb-host,vendorid=0x05af,productid=0x0808 \
-device vfio-pci,host=02:00.0 \
-device vfio-pci,host=02:00.1 \
-drive id=disk0,if=virtio,cache=none,format=raw,file=/var/lib/libvirt/images/os151passthru.raw \
-drive if=pflash,format=raw,file=/usr/share/qemu/ovmf-x86_64-4m-opensuse-code.bin \
-machine type=pc-q35-4.0,accel=kvm,kernel_irqchip=on \
-nic tap,ifname=tap0,script=no,downscript=no

# Following used at install time
#-drive file=/stuff/iso_images/openSUSE/openSUSE-Leap-15.1-DVD-x86_64.iso,index=1,media=cdrom \
#-boot order=dc \



Hello. I’m new here.

I’ve got the NVIDIA drivers from the community repo and installed the suse-prime package to switch graphics cards.
I tried to do GPU passthrough in virt-manager (KVM) to a Windows 10 VM for gaming purposes.
When powering up, the machine doesn’t boot.
Has anyone tried this before? Sorry if I placed the reply in the wrong place.
Thanks in advance.

This is a laptop? Your system is not designed to operate this way; this is designed for desktop systems with separate graphics cards. Unless you try hooking up a second monitor and keyboard and getting video out over HDMI, it's unlikely to work. I tried with a laptop with dual AMD cards and was not successful.

Thanks for replying. Yes, it is a laptop.
I found this guide for Optimus NVIDIA GPUs, but it uses Bumblebee.

So is this what you're following? What is your laptop model? Are you running Tumbleweed? You're better off starting a new thread on your specific system/issues.

Welcome to these openSUSE Forums.
You should know, though, that attaching your new issue to an existing thread started by someone else is considered bad form, because later readers may confuse responses to the first issue with responses to yours.
Start a new thread whenever you have a question.

Because this thread started off more as a neutral “Information” post, your addendum probably won’t cause much confusion.

Optimus laptops, and I suspect all laptops with an nVidia or Radeon GPU, should be configurable to support GPU pass-through (the Intel integrated graphics can probably be configured for the other use, usually the main system).
The other configuration I’m aware of is where a person cracked open his laptop and attached an external GPU and display to the m.2 connector.

You will need to describe in detail the GPU in your machine, the commands you ran, and your results, especially any errors that display. And, of course, the guide you are attempting to follow, if it's different from what you already posted.


Thank you both for the assist.
I will create another thread :slight_smile: