Low performance on win11 VM with QEMU/KVM

Laptop: Acer Aspire A315 with an i5 @ 2.4 GHz, 16 GB RAM and an NVMe drive.

I have always seen performance problems in comparison to bare-metal installations on hosts with the exact same hardware: the Start Menu animation looks slightly laggy in general; any program such as a file manager, web browser or MS Office takes a while to open; CCleaner takes at least a minute or two to clean even a couple of MB; and most notably, cumulative updates from Windows Update take more than 2 hours from the moment the download starts to the moment the reboot finishes. That last one takes half an hour on bare metal.

This is how I run the VM:

qemu-system-x86_64 \
-name "Win11" \
-machine type=pc-q35-7.1,accel=kvm \
-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
-smp 4,sockets=1,cores=2,threads=2 \
-m 8G \
-drive if=pflash,format=raw,readonly=on,file=/usr/share/qemu/ovmf-x86_64-ms-4m-code.bin \
-drive if=pflash,format=raw,file=virtuals/ovmf-x86_64-ms-4m-vars.bin \
-boot menu=on \
-netdev tap,br=br0,helper=/usr/lib/qemu-bridge-helper,vhost=on,id=n1 -device virtio-net-pci,netdev=n1,mac=52:54:00:12:34:56 \
-rtc base=localtime \
-vga qxl -device virtio-serial-pci -spice unix=on,addr=virtuals/vm_spice.socket,disable-ticketing=on -chardev spicevmc,id=spicechannel0,name=vdagent -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
-display spice-app \
-audiodev spice,id=xyz -device ich9-intel-hda -device hda-output,audiodev=xyz \
-device virtio-balloon-pci \
-device qemu-xhci,id=xhci1 -chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \
-monitor stdio \
-drive file=virtuals/win11.qcow2,index=0,media=disk,format=qcow2,discard=unmap,id=disk1,if=none -device virtio-blk-pci,drive=disk1,bootindex=1,physical_block_size=4096,logical_block_size=4096 \
-chardev socket,id=chrtpm,path=virtuals/tpm/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-tis,tpmdev=tpm0

Of course I’m using the virtio drivers, and as seen above I apply some optimizations for CPU and disk.

Is anyone here experiencing similar symptoms?
Are these performance problems expected with a Windows VM?


@F_style Hi, not here. I have 4 cores / 2 threads per core and 32GB of RAM allocated, and it’s also running off an NVMe storage device… It’s just a standard virt-manager install.

Now, this is on a workstation desktop with a Xeon, which is designed for this type of scenario. Since your system has a consumer CPU, I suspect there may be some missing optimizations/flags for it…

After checking, the rig here has an Intel i5-1135G7 @ 2.4 GHz with 4 cores and 8 threads, so I modified the launch script with “-smp 8,sockets=1,cores=4,threads=2”, and the result is exactly the same…

What other optimizations does CPU need?
Why does it seem that win11 VMs run faster on VirtualBox? Hell, I really don’t want to go back to that thing…

@F_style Comparing your qemu output with what I see in the XML files, there are some specific CPU optimizations in use, but I think you need to check what QEMU says you can run, based on your CPU and its flags (the -cpu host line).

Those “specific CPU optimizations” are already there:
-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time
These are the optimizations suggested, by many communities and even official sources, for Windows virtual machines.

What else could you have in your settings?
And why XML? Does that mean you use libvirt?

@F_style Yes; it makes it easier to access over the network with the virtual client.

-cpu Broadwell-IBRS,vme=on,ss=on,vmx=on,pdcm=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff

Ok, I just checked a similar win11 VM, but on VirtualBox (latest version) and on a lower-spec laptop than mine…
The VM works exactly like bare metal!
This is not fair!

What am I doing wrong with QEMU?

Why do you use Broadwell instead of just “host”?
Do you really use all those CPU enhancements?
I read that CPU pinning could also boost performance, but however much I search the internet, there isn’t a single result that explains how on earth to do it without XML, with pure QEMU commands…
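For what it’s worth, there is no single QEMU option for pinning; one way people do it without libvirt is from the host side with `taskset`, once the vCPU thread IDs are known. A minimal sketch, assuming the VM was started with `-name "Win11",debug-threads=on` (the `debug-threads=on` part names each vCPU thread “CPU <n>/KVM” so it can be found under /proc):

```shell
# Hypothetical sketch: pin each vCPU thread of a running QEMU VM
# to a host core, without libvirt. Adjust the pgrep pattern and the
# vCPU-to-core mapping to your own setup.

QEMU_PID=$(pgrep -f 'qemu-system-x86_64.*Win11' | head -n1)

if [ -n "$QEMU_PID" ]; then
    for tid_dir in /proc/"$QEMU_PID"/task/*; do
        tid=${tid_dir##*/}
        comm=$(cat "$tid_dir/comm" 2>/dev/null)
        case "$comm" in
            "CPU "*"/KVM")
                # Thread name is "CPU <n>/KVM"; extract <n> and
                # pin vCPU n to host core n (1:1 mapping).
                vcpu=${comm#CPU }
                vcpu=${vcpu%/KVM}
                taskset -pc "$vcpu" "$tid"
                ;;
        esac
    done
fi
```

The same thread IDs can also be obtained via the QMP/monitor command `info cpus`, if you prefer not to parse /proc.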

Meanwhile, others also suggest disabling timers, but I don’t know how to do that.
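What people usually mean by “disabling timers” for Windows guests is turning off the emulated HPET and letting the guest use the Hyper-V synthetic timers instead. A hedged sketch of the relevant flags (per QEMU’s Hyper-V documentation, hv-stimer requires hv-synic and hv-time, and hv-synic requires hv-vpindex; verify each against your QEMU version):

```shell
# Sketch only: disable the emulated HPET and enable Hyper-V
# synthetic timers, keeping the rest of the original command line.
qemu-system-x86_64 \
  -machine type=pc-q35-7.1,accel=kvm \
  -no-hpet \
  -cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv-vpindex,hv-synic,hv-stimer
  # ... plus the rest of the original options
```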

@F_style That is what the virt-manager install wizard configures, since it’s a Broadwell-era CPU.

I have no issues that would need CPU pinning; I don’t even notice it’s running… Like I indicated, this is a workstation Xeon CPU E5-2695 v4 with 18 cores / 36 threads and 128GB of ECC RAM…

You don’t use CPU pinning…
And a lower-spec laptop runs a win11 VM just like bare metal, but with VirtualBox… Why is that?

@malcolmlewis :
All those CPU optimizations/hypervisor enlightenments you use: did the virt-manager installer automatically add/suggest them based on your host CPU, or did you add them all manually?

And if the former, is there a way to make virt-manager/libvirt/virsh list all the recommended Hyper-V features based on the host CPU?

@F_style No, those are just what libvirt used; no user intervention. And not that I’m aware of, it’s just qemu running…

You are not alone. Mine also runs horribly, and CPU optimizations don’t help. Something must be broken in qemu. The only difference is that I’m on AMD.

Finally I tried this QEMU setting:
-cpu host,hv-passthrough
The “hv-passthrough” parameter enables all hypervisor enlightenments available on the host, overriding any other hv-* flags that are present.

Finally a noticeable speed boost.
I tested it while installing the ugly updates of Windows’s Patch Tuesday. From the moment the download started to the moment the reboot finished, it used to take close to 3 hours; now it took 1.5 hours.
Also, overall VM usage felt slightly better, more usable indeed.

It still takes double the time compared to bare metal, but this is probably the best one can do on a consumer rig…
Oh, and per QEMU’s documentation, this CPU setting prevents safe VM live migration, but I don’t need that, at least for the time being…
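In case someone does need migration later: a possible alternative is to enumerate the enlightenments explicitly instead of using hv-passthrough. The names below are taken from QEMU’s Hyper-V enlightenments documentation; confirm that each one is supported by your QEMU/KVM version before relying on it:

```shell
# Sketch: migration-safe explicit enlightenment list instead of
# hv-passthrough (flag availability depends on QEMU/KVM version).
-cpu host,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-time,hv-vpindex,hv-synic,hv-stimer,hv-runtime,hv-frequencies
```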

Just posting this in case it’s useful for someone else.