Leap 15.2, KVM/Qemu, Spice, virgl3D with Nvidia - working or not?

I run an Opensuse Leap 15.2 system with a variety of KVM/Qemu VMs - with QXL or virtio video devices and Spice; remotely with remote-viewer + TLS and virt-viewer + SSH, locally on the host via sockets. But all without OpenGL acceleration.
My graphics card is an Nvidia GTX 960. I use the proprietary drivers from the Opensuse Nvidia community repository. I know that Nvidia does not support GBM. So, a pure Virgl3D approach with Spice will probably not work on the KVM/Qemu host.
However: There are reports on the Internet that people got virgl3D rendering to work for Ubuntu systems with an EGL display added in addition to Spice. See e.g. https://nyblnet.blogspot.com/2020/11/virgl-3d-acceleration-on-kvm-with.html

I tried to get a similar configuration running on Leap 15.2. So far, I had no success - neither with Opensuse Leap 15.1 guests, nor with present Kali guests.

The libvirt XML configuration for a guest contains something like

<graphics type='spice' keymap='de'>
  <listen type='socket' socket='/opt/spices/spice.socket'/>
  <image compression='off'/>
  <gl enable='no'/>
</graphics>
<graphics type='egl-headless'>
  <gl rendernode='/dev/dri/renderD128'/>
</graphics>
<video>
  <model type='virtio' heads='1' primary='yes'>
    <acceleration accel3d='yes'/>
  </model>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>

As described by Ubuntu users. The render device access rights were set to 666 for tests. I tried with and without KMS mode enabled for the Nvidia drivers on the host.
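On the permissions point, this is the kind of check I did on the render node (the device path is the one from the XML above; the chmod is for testing only - a permanent setup should rather grant access via group membership than a world-writable node):

```shell
# Show owner/group and mode of the render node the EGL display opens.
ls -l /dev/dri/renderD128

# For testing only: open the node to everyone, as described above.
chmod 666 /dev/dri/renderD128
```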

For an Opensuse Leap guest I get the following information regarding "drm":

linux-h07l:~ # dmesg | grep -i drm
[    3.088710] [drm] pci: virtio-vga detected at 0000:00:01.0
[    3.088712] fb: switching to virtiodrmfb from VESA VGA
[    3.091091] [drm] virgl 3d acceleration enabled
[    3.103862] [drm] number of scanouts: 1
[    3.103868] [drm] number of cap sets: 1
[    3.116995] [drm] cap set 0: id 1, max-version 1, max-size 308
[    3.223562] virtio_gpu virtio0: fb0: virtiodrmfb frame buffer device
[    3.336792] [drm] Initialized virtio_gpu 0.0.1 0 for virtio0 on minor 0
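As an aside, the decisive line in such a log is "virgl 3d acceleration enabled"; a small grep wrapper makes that check scriptable (the sample line below is copied from the log above):

```shell
# Report whether a dmesg capture (read from stdin) shows virgl enabled.
check_virgl() {
    if grep -qi 'virgl 3d acceleration enabled'; then
        echo "virgl: on"
    else
        echo "virgl: off"
    fi
}

# On a live guest:  dmesg | check_virgl
# Simulated here with the line from the log above:
printf '%s\n' '[    3.091091] [drm] virgl 3d acceleration enabled' | check_virgl
# prints "virgl: on"
```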

However, on the Spice console (virt-manager or remote-viewer) on the host I get the grub menu, then a blinking cursor on a black screen - but after that the sddm login screen does not appear. Only a white ring on a black background.

glxinfo on the guest starts with
linux-h07l:~ # glxinfo
name of display: localhost:10.0
libGL error: failed to load driver: swrast
Error: couldn’t find RGB GLX visual or fbconfig

132 GLX Visuals
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat

0x021 24 tc 0 24 0 r y . 8 8 8 0 . s 4 24 8 16 16 16 16 0 0 None
0x022 24 dc 0 24 0 r y . 8 8 8 0 . s 4 24 8 16 16 16 16 0 0 None

As said, without the EGL-interface the configuration works perfectly for all guests - but without OpenGL acceleration.

Before I invest more time:
Has anybody out there succeeded with KVM/Qemu, Spice, EGL and virgl3D to get OpenGL accelerated on an Nvidia graphics card on an Opensuse Leap 15.2 KVM/Qemu host?
If so: For what guests and what kind of configuration?

Thank you in advance for your answers.


Today’s generation of hypervisors virtualize the CPU, memory and some I/O, but for the most part do nothing for the GPU. Any special GPU capability is strictly a singular effort by the virtualization vendor, because unlike almost everything else that has been virtualized, few standards exist.

I don’t usually follow discoveries and breakthroughs about graphics,
but from time to time it’s good to take a look around and see what’s happened recently.
KVM and QEMU appear to have hardly budged recently.
The best info I’ve found on bleeding-edge developments is what has been collected in the following ArchWiki article… the last 2 links more or less point to info that might be helpful with your current approach. You can use the info to verify, and maybe supplement, the info you’re currently using.

The first 2 links in the above article have seen significant advances over the past 3 years or so: if you have an extra graphics card, you can do a GPU pass-through. The approach is much easier than when modern methods first appeared, and appears to work for most people. Note that this approach, like any other pass-through, requires multiple GPU cards, because once hardware is passed to a Guest, every other Guest and the HostOS itself is blocked from using that hardware. I remember a couple of years ago a lot of people were very excited about Looking Glass.

Years ago, I posted the following, which points to “live” documentation that is always kept up to date - so no matter what virtualization you use and no matter when you click on the links, you’ll get the latest, current documentation for how to do a GPU passthrough. Today, if your hardware supports a GPU passthrough, it should be very easy to do.


If you run into another specific problem, post again.


Hi Tsu,
Thank you for your answer. I am familiar with the Arch Linux posts, and I do have (another) machine running with GPU pass-through. I agree that no real standards exist regarding virtualization and GPUs. Personally, I find that a shame - especially Nvidia's tendency to create their own standards (e.g. their own realization of OpenGL in their Linux drivers, or their latest vGPU technology together with Red Hat; not much open source there … and Red Hat accepts it and nevertheless integrates vGPU with Spice).

However, I do not at all agree with your statement that virtualization people do not care about the GPU. I think that quite the opposite is true. Just look at the present cooperation of Red Hat and Nvidia regarding vGPU technology in grids. OpenGL-accelerated visualization technology is a key in present scientific applications - see e.g. the car or the chemical/pharmaceutical industry - especially on virtualized systems, containers as well as KVM/Qemu VMs.

But all of this was not my question. :wink: And, frankly, I do not care much whether KVM/Qemu “budged” or not. :slight_smile:
A lot of people have Spice, virgl3D-renderer and OpenGL running - on Red Hat, on Debian, on Ubuntu - with Radeon and Intel graphics cards. (I have seen it on systems with Radeon cards.) Most of these people do not start their Qemu VMs via libvirt, but directly on the command line with some qemu command. So, in principle, virgl3D rendering works - with Nvidia being a major problem in the game, as this company refrains from supporting the required methods. But does virgl3D work on a plain Opensuse Leap 15.2 installation?
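For reference, the direct invocations I have seen reported follow Gerd Hoffmann's recipe and look roughly like this (a sketch only - the disk image name, socket path and memory size are placeholders):

```shell
# Simplest local test: virtio GPU with virgl, GL-enabled SDL display.
qemu-system-x86_64 -enable-kvm -m 4096 \
    -vga virtio -display sdl,gl=on \
    guest-disk.qcow2

# Spice variant: GL over a local Unix socket (local clients only,
# since the GL buffers cannot be sent over the network this way).
qemu-system-x86_64 -enable-kvm -m 4096 \
    -vga virtio \
    -spice unix,addr=/tmp/vm.sock,disable-ticketing,gl=on \
    guest-disk.qcow2
```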

My question was whether anybody succeeded to get Spice, virgl3D and OpenGL running

  • on Opensuse Leap 15.2,

  • with an Nvidia card,

  • with a seemingly required EGL-interface in addition to the Spice display and the virtio graphics device,

  • with virt-manager or virsh used to start the VM (if possible).

In addition, I find the Opensuse documentation about Spice sparse, unfortunately. So I turned to the community.
When I find the time I will test a Virgl3D based configuration on a system with an Intel graphics card instead of an Nvidia card - and maybe a Debian installation in addition to Leap 15.2.
But I had hoped that somebody in the Opensuse community had already tried out whether Virgl3D works on Leap 15.2.


I want to add that with an Opensuse Leap 15.2 client I get the following results regarding the XML-configuration of my first post in this thread:

  • a white screen in virt-manager or remote-viewer at the point where the sddm login screen should be presented

In the guest:

linux-h07l:~ # dmesg | grep -i drm
[    2.535402] [drm] pci: virtio-vga detected at 0000:00:01.0
[    2.535406] fb0: switching to virtiodrmfb from VESA VGA
[    2.535633] [drm] virgl 3d acceleration enabled
[    2.544749] [drm] number of scanouts: 2
[    2.544754] [drm] number of cap sets: 1
[    2.559621] [drm] cap set 0: id 1, max-version 1, max-size 308
[    2.560446] [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
[    2.567990] virtio_gpu virtio0: fb0: virtiodrmfb frame buffer device

  • In the guest:

    linux-h07l:~ # glxinfo
    name of display: localhost:10.0
    libGL error: failed to load driver: swrast
    X Error of failed request: GLXBadContext
      Major opcode of failed request: 151 (GLX)
      Minor opcode of failed request: 6 (X_GLXIsDirect)
      Serial number of failed request: 55
      Current serial number in output stream: 54
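For comparison: on guests where virgl works, glxinfo reportedly names "virgl" as the renderer instead of erroring out. A tiny filter for that check (the expected string is taken from reports for other distributions, not from my own Leap guests):

```shell
# Extract the OpenGL renderer string from glxinfo output (stdin).
gl_renderer() {
    sed -n 's/^OpenGL renderer string: //p'
}

# On a live guest:  glxinfo | gl_renderer
# Simulated with the line a working virgl guest reportedly prints:
printf '%s\n' 'OpenGL renderer string: virgl' | gl_renderer
# prints "virgl"
```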

Crafting a qemu script isn’t too bad? Likely need to use uefi for a start with the nvidia card, what have you tried in the past with qemu?

I use gpu passthrough here, was an easier solution for me.

You need a later kernel for Virgil3D support?

Did you follow the steps described in the Gerd Hoffman blog from the ArchLinux article I posted?

Some essentials…
Describes requisites, one of which is to be certain you’re running Xorg and not Wayland (unknown if there is a problem with Wayland or not, but in 2016, this was what worked)
Manually connecting a Unix socket to expose virgil3D to libvirt
You may have discovered that the ArchLinux thread is only good for invoking qemu-system-x86 and isn’t sufficient to work using libvirt.

You probably know already that, since all Virgil3D components are upstreamed and integrated into QEMU, there’s no separate source to install… all that should be required is correct configuration and invocation.
And, especially for the benefit of others viewing this thread: from what I read, this method using Virgil3D should work with any other GPU except Intel (see my ArchLinux reference for setting up a solution for Intel GPUs). I don’t think you should have any problem with your nVidia GPU.

I’d also add that SDDM has a reputation for being “too simple” to provide more than basic functionality. To be safe, I’d advise installing LightDM and configuring it as your default display manager using the alternatives subsystem switcher as follows

update-alternatives --config default-xsession.desktop 


Hi Tsu,
I have tried this long ago already, with Leap 15.0/15.1, and, yes, I have also tried the original, most simple configuration from Kraxel with Leap 15.2. With AppArmor active, without AppArmor, and, and … :slight_smile:
It is not working - you end up with a screen which tells you “Connected to graphics server”, and things stop there. In virt-manager you just get a black screen - as has been reported by many others.

This result is not too surprising as

(1) the original recipe is from 2016; things have changed in the meantime, and the recommended configuration today is

<graphics type='spice'>
  <listen type='none'/>  <!-- Standard socket! -->
  <image compression='off'/>
  <gl enable='yes'/>
</graphics>
<video>
  <model type='virtio'>
    <acceleration accel3d='yes'/>
  </model>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>

This today works on other current distributions - at least with Radeon and Intel graphics cards.

(2) Nvidia poses a problem by not supporting the GBM method used with Virgil3D rendering; Nvidia requires an EGL interface in addition. This point has also been stressed by others on the Internet - and it is, by the way, true in a similar way for using Wayland with Nvidia cards (e.g. on Tumbleweed). See KDE’s recommendation on this: https://community.kde.org/Plasma/Wayland/Nvidia.
As expected for Nvidia, you have to install their specific EGL library “libnvidia-egl-wayland1” to get Wayland working. See also: https://github.com/NVIDIA/egl-wayland and https://github.com/NVIDIA and https://download.nvidia.com/XFree86/Linux-x86_64/460.67/README/installedcomponents.html.

So, I do not agree with your statement that Nvidia does not make any problems. I have read exactly the opposite regarding Virgl3D. Nvidia’s own drivers - in contrast to the Nouveau driver - implement a very special OpenGL version. Believe me, I have followed things about Virgil3D and Nvidia on the Internet over the years.

But, anyway, I will try to get things working on another non-productive Leap 15.2 system

  • with another graphics card (Intel)

  • and/or Nvidia with the Nouveau driver, which respects GBM

  • as soon as I find the time for it - and I will come back with the results.

Disregarding Nvidia for a moment: I agree with your point that things should work without any special requirements or installing separate sources.

Changing the desktop manager may be worth trying. I shall report back on this point.


Malcolm, thank you for your answer.
As I said in another comment, I have GPU passthrough working on another system. I just wanted to try a “simple” Virgl3D setup. But you are right - I should experiment a bit with direct qemu commands. I remember having read a post somewhere that Qemu with SDL worked with Virgl3D on an Opensuse Leap 15.?? system - though not with Spice at that time.
Not quite sure about the kernel version - the guy who got things running on Ubuntu probably worked with Ubuntu 20 => kernel 5.4. I’ll give it a try with Leap 15.2 or Tumbleweed after Easter.

I assume you’ve viewed the Virgil3D page (BTW - note the spelling, which is different from what you are routinely using)


They have a mailing list which might be able to point you in a particular direction based on your specific details.

Looks like a very interesting project, and they seem to be confident that even the component versions in Leap should work well.