VGA PCI Passthrough guide on openSUSE Leap 42.2

Because I could not find a PCI passthrough guide made for openSUSE (only for some other distros), and because there are differences which might discourage newer openSUSE users from setting up a gaming VM with PCI passthrough, I decided to make one myself. I hope you’ll find this useful:

Notes
I’ve successfully set up GPU passthrough on my PC - i5-3570 CPU, Asus P8H77-V LE motherboard, one Radeon R7 250 video card (for the virtual machine) and one Nvidia 750 Ti (for the host) - running openSUSE Leap 42.2, kernel 4.4.36-8-default.
Although I’ll try to make a guide that anyone can follow, I’ll assume that the reader has at least some Linux knowledge.

Prerequisites

  • you need at least 2 video cards - one will be used by the Linux host and the other will be dedicated to the virtual machine (let’s use VM from now on) that’s running Windows.
  • you can’t use an Intel integrated GPU for the VM; you can use only AMD or Nvidia. You can use an Intel integrated GPU for the host, though.
  • you need a CPU and a motherboard that support VT-d (Intel) or AMD-Vi (AMD) - a list can be found <a href="https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware" target="_blank">here</a> - and you need to enable it in the BIOS.
  • the PCI root port should not be part of the same IOMMU group as the GPU you want to use with the VM (that will be explained more later).
  • a secondary display or free input port on your primary display to be used by the VM.
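As a rough first check, you can look for the CPU’s virtualization flags from a shell. This is only a sketch and only confirms VT-x/AMD-V; IOMMU (VT-d/AMD-Vi) support is a separate feature that must still be enabled in the BIOS and verified via dmesg as described below:

```shell
# Print the CPU virtualization flag if present (vmx = Intel VT-x, svm = AMD-V).
# Note: this does NOT confirm VT-d/AMD-Vi (IOMMU) support, which must also be
# enabled in the BIOS and checked separately.
flag=$(grep -m1 -o -E 'vmx|svm' /proc/cpuinfo || true)
if [ -n "$flag" ]; then
  echo "virtualization flag present: $flag"
else
  echo "no vmx/svm flag found - check the BIOS or your CPU model"
fi
```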
  1. Install software
zypper in libvirt libvirt-client libvirt-daemon virt-manager virt-install virt-viewer qemu qemu-kvm qemu-ovmf-x86_64 qemu-tools

Also, download VirtIO drivers from here.

  2. Enable IOMMU
    IOMMU is a generic name for the I/O virtualization technologies Intel VT-d and AMD-Vi. Enable it by adding the intel_iommu=on (for Intel CPUs) or amd_iommu=on (for AMD CPUs) kernel option to the bootloader. To do this, edit /etc/default/grub and add intel_iommu=on or amd_iommu=on to the GRUB_CMDLINE_LINUX_DEFAULT options. Save the file and regenerate the GRUB configuration by executing
grub2-mkconfig -o /boot/grub2/grub.cfg
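For reference, the edited line in /etc/default/grub might look something like this (the other options shown are typical openSUSE defaults used purely for illustration - your line will differ; Intel shown):

```shell
# /etc/default/grub - example only; keep your existing options and append the IOMMU one
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet showopts intel_iommu=on"
```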

Reboot and execute

dmesg | grep -e DMAR -e IOMMU

to verify that IOMMU is enabled properly.
http://i.imgur.com/EJTyZmv.png

If you have a similar output, IOMMU is enabled.

  3. Make sure that IOMMU groups are valid
    Run this script (I’ve named it lsiommu):
#!/bin/bash
# Print every PCI device together with the IOMMU group it belongs to.
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}   # extract the IOMMU group number
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"                 # the device address is the last path component
done

The result should be an output like this:

http://imgur.com/4kuajgc.png

Here you can find two things: which IOMMU group the device you want to use for the virtual machine belongs to (in my case, group 11) and its vendor/model IDs (in my case, 1002:683f and 1002:aab0). This guide will only work for devices which are alone (ignoring their associated audio device) in their IOMMU group.
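To narrow that listing down, here is a hypothetical helper (not part of the original guide) that filters lsiommu-style output so that only groups containing a VGA controller are printed, together with every other device sharing the group - anything besides the GPU’s own audio function showing up here is a problem for passthrough. The sample input is modeled on the group and IDs used in this guide:

```shell
# Filter lsiommu-style output: keep only IOMMU groups that contain a VGA
# controller, plus everything else that shares those groups.
filter_vga_groups() {
  awk '{ grp = $3; line[NR] = $0; group[NR] = grp }
       /VGA compatible controller/ { vga[grp] = 1 }
       END { for (i = 1; i <= NR; i++) if (group[i] in vga) print line[i] }'
}

# Sample input modeled on the guide's example (group 11, IDs 1002:683f/aab0):
out=$(filter_vga_groups <<'EOF'
IOMMU Group 10 00:1f.0 ISA bridge [0601]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e49]
IOMMU Group 11 02:00.0 VGA compatible controller [0300]: AMD/ATI Cape Verde PRO [Radeon R7 250] [1002:683f]
IOMMU Group 11 02:00.1 Audio device [0403]: AMD/ATI Cape Verde HDMI Audio [1002:aab0]
EOF
)
echo "$out"
```

On a real system you would pipe the lsiommu script’s output through filter_vga_groups instead of the here-document.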

  4. GPU isolation
    We need to isolate the GPU. This is pretty easy if you have 2 GPUs from different manufacturers (like me - just blacklist the module for the GPU you want to use in the VM), but a bit more complicated if both GPUs are from the same vendor, and a lot more complicated if both GPUs are the same model. This guide can be followed as long as your GPUs are NOT the exact same model, even if they are from the same vendor. It will NOT work with 2 identical GPUs.
    First, create a file named /etc/modprobe.d/gpu-passthrough.conf. Edit it and insert:

options vfio-pci ids=<your GPU id>,<your GPU's audio device id>

in my case, that was:

options vfio-pci ids=1002:683f,1002:aab0

Please note the IDs. Edit /etc/default/grub again and add rd.driver.pre=vfio-pci to the GRUB_CMDLINE_LINUX_DEFAULT options. Save it and regenerate the GRUB configuration as before.
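If you want to extract the vendor:device IDs programmatically rather than reading them off the lsiommu output, something like this works. The sample line is assembled from the guide’s example GPU; on a live system you would pipe in `lspci -nn` instead:

```shell
# The [vendor:device] ID pair is the last bracketed hex token of an
# `lspci -nn` line; the sample reuses the guide's example GPU (1002:683f).
line='02:00.0 VGA compatible controller: AMD/ATI Cape Verde PRO [Radeon R7 250] [1002:683f]'
id=$(echo "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$id"
```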

  5. Rebuild initrd
    You need to rebuild the initial RAM disk to include all the needed modules. Create a file named /etc/dracut.conf.d/gpu-passthrough.conf and insert this line into it:
    add_drivers+="pci_stub vfio vfio_iommu_type1 vfio_pci vfio_virqfd kvm kvm_intel"
    Now rebuild the initrd by executing
dracut --force /boot/initrd $(uname -r)

Please pay attention: if you did something wrong, this might make your Linux system unbootable.

  6. Reboot and check that your GPU was isolated
    After you’ve rebuilt your initrd, check whether your GPU was isolated: run lspci -k and look for “Kernel driver in use:” for the GPU that you want to isolate (i.e., use with the VM). It should state vfio-pci. Example:
02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde PRO [Radeon HD 7750/8740 / R7 250E]
  Subsystem: PC Partner Limited / Sapphire Technology Device a001
  Kernel driver in use: vfio-pci
  Kernel modules: radeon
02:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series]
  Subsystem: PC Partner Limited / Sapphire Technology Device aab0
  Kernel driver in use: vfio-pci
  Kernel modules: snd_hda_intel
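The same check can be scripted. The sketch below scans `lspci -k`-style output and prints the address of every device bound to vfio-pci; here it is fed a sample matching the listing above, but on a real system you would pipe `lspci -k` in directly:

```shell
# Print the PCI address of every device whose in-use driver is vfio-pci.
# Device lines begin with an address like "02:00.0"; detail lines are indented.
bound=$(awk '/^[0-9a-f]+:[0-9a-f]+\.[0-9a-f]/ { dev = $1 }
             /Kernel driver in use: vfio-pci/  { print dev }' <<'EOF'
02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde PRO
	Kernel driver in use: vfio-pci
02:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio
	Kernel driver in use: vfio-pci
EOF
)
echo "$bound"
```

If either function of the GPU is missing from the output, the vfio-pci binding did not take effect.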
  7. Define a VM
    First, open /etc/libvirt/qemu.conf, find the nvram option and edit it like this:
nvram = 
  "/usr/share/qemu/ovmf-x86_64.bin:/usr/share/qemu/ovmf-x86_64-code.bin"
]

Restart libvirt daemon:

systemctl restart libvirtd

Start Virtual Machine Manager and create a new machine.
Select “Local install media” and “x86_64” for architecture options:
http://i.imgur.com/Sf04FVi.png
Select the Windows installer ISO image:
http://i.imgur.com/e5uHIM2.png
Choose how much memory and how many CPU cores your VM will have:
(because there is a limit for the number of images a post can contain, all the other images can be found here)
Create a disk image for your VM (or select one if you already have one defined); it can even be a whole disk - just put its path there and make sure it’s not mounted on the host:

Name your VM, make sure you’ve checked “Customize configuration before install” and click on Finish:

You’ll now be taken to the VM configuration screen. On the Overview page, select UEFI firmware and the i440FX chipset (Q35 did not work for me):

On the Processor page, set “host-passthrough” as the processor model (if it’s not in the list, just type it):

Click the “Add Hardware” button and add a SCSI controller (make sure to change the type to VirtIO SCSI for improved performance):

You’ll have an IDE disk by default. Change that to SCSI:

Add your isolated GPU - if it has a sound device, like mine, and if you have sound output (via HDMI or DisplayPort) on your display, add the sound device (in my case, PCI device 02:00.1) as well:

Add a keyboard and a mouse - it would be better if you have an extra pair, as the input will be grabbed by the VM once it starts:

Remove all unnecessary devices - Tablet, Display Spice, Console, Channel spice, Video QXL. You’ll need a sound card if your display does not have sound output or if you’ll connect it via VGA; you can buy a cheap USB sound card for around $10 and add it like you added the keyboard and the mouse:

Connect the video card that you’ve just assigned to the VM to a display and click “Begin Installation”. If you did everything right, you should see the VM booting on that display.
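For orientation only: under the hood, adding the PCI device in virt-manager produces a hostdev entry in the domain XML roughly like the following (the address is this guide’s example GPU; yours will differ, and virt-manager writes this for you):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host address of the passed-through GPU (bus 02, slot 00, function 0) -->
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```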

  8. Install Windows. You’ll need the VirtIO drivers downloaded at step 1 for the HDD (they can be found under the vioscsi directory).

  9. After Windows is installed, you can install Synergy, which is really amazing - with it you can use the same keyboard and mouse on multiple PCs without a physical KVM (keyboard-video-mouse) switch.

Very cool post.

Some suggestions…

  • You need to identify the virtualization technology you’re using in the subject line and if you do a full write-up, in the title. In your case, it’s KVM.
  • Instead of installing your list of packages, you should use “YAST > Install virtualization” as your starting point. It’s the recommended method and the standard starting point to install KVM or Xen: not only are all desired packages installed, but some additional configuration is performed, including setting up a default virtual network using a Linux bridge device (typically named br0).
  • When you create a full write-up, be sure to state the current date and likely include a warning that you’re describing a topic which is considered bleeding edge. Although the steps you describe are more mature than previous procedures, it’s anyone’s guess how long what you describe will remain effective, so future readers need to know whether your procedure is still current or has been superseded by something else.

Good stuff,
TSU

Thanks :slight_smile:

Some suggestions…

  • You need to identify the virtualization technology you’re using in the subject line and if you do a full write-up, in the title. In your case, it’s KVM.

You’re right; sadly, I can’t edit the title (or I can’t find out how).

  • Instead of installing your list of packages, you should use “YAST > Install virtualization” as your starting point. It’s the recommended method and the standard starting point to install KVM or Xen: not only are all desired packages installed, but some additional configuration is performed, including setting up a default virtual network using a Linux bridge device (typically named br0).

I’m not 100% sure, but I think that qemu-ovmf-x86_64 and qemu-tools will not be installed that way. However, I agree with you - it would be better to install virtualization via YaST and, if still needed, install those two extra packages with zypper. But I can’t edit the post anymore.

  • When you create a full write-up, be sure to state the current date and likely include a warning that you’re describing a topic which is considered bleeding edge. Although the steps you describe are more mature than previous procedures, it’s anyone’s guess how long what you describe will remain effective, so future readers need to know whether your procedure is still current or has been superseded by something else.

Good stuff,
TSU

I will take note on that in the future. Thanks for your feedback.

And one extra note:
There is an error in step 7; the correct content should be:


nvram = [
  "/usr/share/qemu/ovmf-x86_64.bin:/usr/share/qemu/ovmf-x86_64-code.bin"
]

(note the missing [)

Don’t worry about trying to edit a Forums post.
Nothing in these Forums should be considered any more than “best of knowledge” at the time the post was written, even when better facts appear seconds later; that is why these are “Discussions.”

If at some time you want to create something <really> authoritative, you might create an SDB (Google all the SDBs that exist). Those are supposed to be written with extra care and become reliable references intended to stand the test of time.

But for those things that might not stand the test of time, or if you’d simply prefer not to create something with that kind of authoritative standing, you can always create a Wiki page of your own, subject to hardly any guidelines. For example, I’ve written numerous articles of this type, ranging from how to install application solutions, frameworks, and individual tools to a lot more. You’re welcome to peruse what I’ve written for ideas on how you might create your own article (I don’t follow any particular format, but most are based on well-known “best practices” in project and business management in general).

https://en.opensuse.org/User:Tsu2

To start your own Wiki, see the signature to this Post.

BTW - You might also be interested in scripting some of the procedures you describe. How to do that is sprinkled through many of my articles as well.

TSU

This should be " pci_stub … kvm_intel " - note the leading and trailing spaces. This is important because spaces are used to separate individual names. Your example will work as long as it is the only assignment, or as long as everyone else follows the rule of adding spaces around the value.

Add your isolated GPU

Screenshots for this would probably be more useful than the standard OS selection, which is not specific to PCI passthrough in any way.

You may consider asking moderator to move this post to How To/FAQ section. I believe it should be possible to edit it then. Alternative is of course adding this information to common openSUSE wiki.

Do you maybe know how to avoid error code 43 on Nvidia cards?

I’ve found this: VFIO tips and tricks: VFIO+VGA FAQ (go to Question 10). Based on that, you won’t get error 43 if you use a driver older than 337.88, there is a workaround for 337.88 and 340.52, and no driver newer than 344.11 will work without performance penalty (which kind of defeats the purpose of PCI passthrough). But that page is a bit old, maybe there is a way to make the newer drivers work. If not, use an older one.
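For what it’s worth, the workaround most often cited nowadays (newer than that FAQ) is to hide the hypervisor from the Nvidia guest driver in the libvirt domain XML. I haven’t verified this on Leap 42.2 myself, so treat it as a pointer rather than a tested recipe (the vendor_id value is arbitrary, up to 12 characters):

```xml
<features>
  <!-- spoof the Hyper-V vendor id and hide the KVM signature; detecting
       these is what triggers error 43 in the Nvidia guest driver -->
  <hyperv>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```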

[@arvidjaar](https://forums.opensuse.org/member.php/69818-arvidjaar)
You’re right, there should be leading and trailing spaces there. I will create a wiki, as tsu2 suggested; that way I can update it when needed.

Indeed, as this is NOT a request for help but a How To, it shouldn’t have been here in the first place. But the Unreviewed How To forum has the same rules as here: no editing.

This is CLOSED for the moment.

Moved from Virtualization and open again.

Some additional comments regarding the procedure:
Hardware requirements:
The number of PCIe lanes is important. A CPU supporting only 16 lanes could limit the two GPUs if there are additional devices which require PCIe lanes, e.g. certain M.2 SSDs.
Also, not every motherboard is suitable. My old machine gave the host GPU PCIe x16 and the guest GPU PCIe x4, which is not sufficient (both PCIe slots were built and sold as “x16”). There was nothing I could do to change this from the BIOS or OS side.

It was mentioned already to use YaST2. I recommend it as well, since you can easily set up a network bridge with it (in contrast to Ubuntu). This is useful if you want to run more than one VM at the same time connected to the internet.
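If you prefer to set the bridge up by hand instead, a minimal openSUSE-style config would look roughly like this in /etc/sysconfig/network/ifcfg-br0 (a sketch; eth0 is an assumed interface name - substitute your NIC):

```shell
# /etc/sysconfig/network/ifcfg-br0 (sketch; eth0 is an assumed port name)
STARTMODE='auto'
BOOTPROTO='dhcp'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'
```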

Legacy Windows: Win XP and Win 7
One of the reasons why I got interested in this: I wanted to run my old XP games in a VM.
The above procedure works for Win >= 8, since it is UEFI-based. However, for older Windows versions one needs to use SeaBIOS instead of OVMF. In addition, the GPU needs to be switched to VGA mode as outlined in
http://vfio.blogspot.de/2015/05/vfio-gpu-how-to-series-part-5-vga-mode.html
It describes the use of a small script, e.g. called qemu-kvm.vga, which I had to place in /usr/bin/qemu-kvm.vga in order to get it to work on Leap 42.2.
Apart from that, I would heavily recommend reading the following sites inside out:
http://vfio.blogspot.de/2015/05/vfio-gpu-how-to-series-part-4-our-first.html
http://vfio.blogspot.de/2015/05/vfio-gpu-how-to-series-part-3-host.html

They help to understand the background and are pretty detailed at all steps. Although written for Fedora, I could easily adapt them to Leap 42 and Tumbleweed as well. Ubuntu as a host system gave me a lot of headaches, but it worked in the end, too. Fortunately, AppArmor in Leap did not cause any problems during the implementation.

One last thing: it works even with 3 GPUs - provided you have enough PCIe lanes (40 in my case). So I run Windows XP on one screen/GPU, while having Netflix running on Ubuntu on a second screen/GPU, all taken care of by the host Leap 42.2 on the third GPU/screen. One can easily compartmentalize office work, gaming, media, etc.
The hardware I run this on: E5-1650v3; ASUS X99-E WS; GTX 750 as the host card, GTX 960 (Windows and Linux) and AMD R5 230 (Linux only) as guest GPUs.

I followed the directions and do not get an error, but I also get no display on my secondary graphics card. I enabled the console within Virtual Machine Manager as well, and it shows nothing other than a blinking cursor.

At least for a while, the links provided in this Forum post (May 20 2017) might be helpful for whatever virtualization you’re using:

https://forums.opensuse.org/showthread.php/524942-GPU-passthrough-Various-virtualization-technologies

As noted at the link,
No matter what virtualization you’re running, this topic is fast-changing and may be valid for no more than 6 months at a time.

TSU