Confused about the differences between the many OVMF .bin files that come with a KVM installation.

Hi all,

After my successful migration to openSUSE Tumbleweed over the past year or so, I have decided to migrate to KVM/QEMU with Virtual Machine Manager, replacing the VMware Workstation Pro setup I have been using all along. This is mainly due to the performance improvements I have witnessed in KVM VMs, as well as getting tired of having to fiddle with Workstation Pro to get it to behave as expected every time Tumbleweed gets a new (kernel) update.

I am currently converting all of my daily (mostly Windows) VMs that were created in VMware Workstation Pro, and the overall process has been going just fine so far. However, I am a bit confused about the process of converting VMware VMs that use UEFI firmware to KVM.

My qemu-img-converted disk images import and boot just fine in a new KVM Virtual Machine Manager VM when I choose any of the firmware options below under VMM > VM’s Virtual Hardware Details > Overview > Hypervisor Details > Firmware:

  1. UEFI x86_64:/usr/share/qemu/ovmf-x86_64-smm-ms-code.bin
  2. UEFI x86_64:/usr/share/qemu/ovmf-x86_64-smm-code.bin
  3. UEFI x86_64:/usr/share/qemu/ovmf-x86_64-4m-code.bin

But my questions are:

  1. Why the three files? What are the differences between them, if any? (I did my googling but was only more confused by what I found.)
  2. Which one works better with Windows (7 & 10) VMs converted from VMware Workstation Pro? (In my limited testing they all perform identically so far.)
  3. Why do I get a system reboot instead of a shutdown on the first shutdown command with each of the above UEFI files? Is it by any chance related to the VM, or to my choice of UEFI .bin file? (The first shutdown after the initial boot under KVM reboots the system instead; the second attempt shuts it down. Subsequent attempts work on the first try. I have tried it with all three UEFI files so far. The same does not happen with converted VMs that use BIOS firmware.)


The “smm” variants include support for System Management Mode. The “ms” variants include the Microsoft certificates for Secure Boot, which lets you enable Secure Boot in guests and run normal binaries that are signed by Microsoft. The 4M variants are compiled with a larger pflash size, that’s all. I usually use ms-4m (mostly because I am interested in testing Secure Boot as well). I have skipped the smm variants so far.

Which one works better with Windows (7 & 10) VMs converted from VMWare Workstation Pro? (In my limited testing they are all performing identically so far).

All of them contain the same firmware. I have no experience with SMM, so I cannot comment on it. The 4M variant simply gives you more space for EFI variables, which probably does not really matter. And the “ms” variant enables Secure Boot by default, that’s all.

Why do I get a system reboot instead of a shutdown on the first shutdown command with each of the above UEFI files?

I have not seen it with Linux guests, and in any case I do not use libvirt, I call qemu directly, so I will let someone else comment on it.

Thank you so much, this was very clarifying!

In 15.2 we additionally have:

ovmf-x86_64.bin ?
ovmf-x86_64-code.bin ?
ovmf-x86_64-opensuse.bin includes openSUSE Secure Boot keys
ovmf-x86_64-suse.bin includes SUSE Secure Boot keys
ovmf-x86_64-vars.bin ?


The *-code.bin files are the UEFI firmware images.

The *-vars.bin files are the corresponding variable-store images, which serve as templates for the per-VM non-volatile store. libvirt copies the specified vars template to a per-VM path under /var/lib/libvirt/qemu/nvram/ when first creating the VM.
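As a rough sketch of what that copy amounts to (the real paths on openSUSE would be /usr/share/qemu/ovmf-x86_64-vars.bin and /var/lib/libvirt/qemu/nvram/; temp stand-ins are used here so the sketch runs anywhere, and the domain name is hypothetical):

```shell
# Stand-ins for the real template and nvram directory.
TEMPLATE=$(mktemp)          # stands in for /usr/share/qemu/ovmf-x86_64-vars.bin
printf 'EFI-vars-template' > "$TEMPLATE"
NVRAM_DIR=$(mktemp -d)      # stands in for /var/lib/libvirt/qemu/nvram
DOMAIN=win10                # hypothetical libvirt domain name

# The essential step: each VM gets its own writable copy of the vars
# template, while the shared -code.bin firmware is mapped read-only.
install -m 0600 "$TEMPLATE" "$NVRAM_DIR/${DOMAIN}_VARS.fd"
```

From then on, the guest’s UEFI variables live in that per-VM copy, not in the template.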

Files without code or vars in the name can be used as a single combined UEFI image. They are less useful, since no UEFI variables persist across power cycles of the VM.

The -ms.bin files contain the Microsoft keys as found on real hardware. Therefore, they are configured as the default in libvirt.

Likewise, the -suse.bin files contain preinstalled SUSE and openSUSE keys.

There is also a set of files with no preinstalled keys.

I don’t understand the description of the *vars.bin files.
I thought UEFI settings were saved in the .fd file (the one specified in the VM’s libvirt XML file).
And what is the difference between the *code.bin files and “a set of files with no preinstalled keys” (which I assume means e.g. ovmf-x86_64.bin)?

I run qemu directly, with the code and vars files copied out of their default location; the code file is set read-only, and vars == fd file.

-drive if=pflash,format=raw,unit=0,file=$QEMU_VARS_PATH/$QEMU_CODE,readonly=on \
-drive if=pflash,format=raw,unit=1,file=$QEMU_VARS_PATH/$QEMU_VARS \
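The “copied out of the default location” step might look like this (a hedged sketch: /usr/share/qemu is the openSUSE location, the per-VM directory is a temp stand-in, and empty placeholders are created where the OVMF package is absent so the sketch still runs):

```shell
SRC=/usr/share/qemu                     # where openSUSE installs the OVMF images
QEMU_VARS_PATH=$(mktemp -d)             # stand-in for the real per-VM directory
QEMU_CODE=ovmf-x86_64-smm-code.bin
QEMU_VARS=ovmf-x86_64-smm-vars.bin

if [ -f "$SRC/$QEMU_CODE" ] && [ -f "$SRC/$QEMU_VARS" ]; then
    cp "$SRC/$QEMU_CODE" "$SRC/$QEMU_VARS" "$QEMU_VARS_PATH/"
else
    : > "$QEMU_VARS_PATH/$QEMU_CODE"    # empty placeholders where OVMF is absent
    : > "$QEMU_VARS_PATH/$QEMU_VARS"
fi
chmod a-w "$QEMU_VARS_PATH/$QEMU_CODE"  # firmware read-only; vars stays writable
```

The two -drive lines above then point unit 0 at the read-only code image and unit 1 at this VM’s private vars copy.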

I mostly set up a VM via the GUI. I occasionally do it with “virt-install”, mainly so I can select a different OVMF file.

As I understand it:

name-code.bin: the actual firmware code that runs as the emulated BIOS.

name-vars.bin: the initial data (default BIOS settings) used by that code. It is copied to a file in “/var/lib/libvirt/qemu/nvram” so it becomes the NVRAM for that virtual machine.

name.bin: this combines both parts in a single file, but I’m not sure how that works if you want to change NVRAM. Best to use the separate files (code/vars), in my opinion.

What’s in a name? The file name absolutely does not matter; the file content does. Those files contain the standard variables that are expected to be present by default. If you want, you can rename them to .fd or leave the names as they are. Of course you need to point your VM description to the file you want to use.

Those files are intended as templates that you copy into a private file specific to each VM.
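For reference, a sketch of how libvirt expresses that pairing in the domain XML (the nvram path shown is a hypothetical per-VM copy under libvirt’s standard directory; the template attribute tells libvirt which file to copy on first boot):

```xml
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x86_64-code.bin</loader>
  <nvram template='/usr/share/qemu/ovmf-x86_64-vars.bin'>/var/lib/libvirt/qemu/nvram/VM01_VARS.fd</nvram>
</os>
```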

So would it make sense to copy /usr/share/qemu/ovmf-x86_64-vars.bin to e.g. the domain folder /mnt/VM01/ and have the VM XML look like this:

    <type arch='x86_64' machine='q35'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x86_64-code.bin</loader>