Moving a VM from VMware to KVM

I’m looking to move a Windows virtual machine from VMware’s format to KVM, as I’m so tired of kernel updates breaking VMware. I’ve read some internet articles that state you have to convert the disk; I just wanted to see if anyone here has done this and what the easiest way to achieve it would be. It’s a Windows 10 image. The following URL is what I’m looking at doing. Has anybody taken this route, or is there another way that’s better?

Another question: can I create a subvolume for my virtual machines off of / and turn off CoW, since I’m reading it’s bad for VM performance? I want to use just one drive, and Tumbleweed has the entire drive, so I’d like to just create a subvolume.
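For what it’s worth, a minimal sketch of that subvolume setup, assuming the root filesystem is Btrfs and using /vm as an example mount point (adjust the path to taste):

```shell
# Create a subvolume for VM images off the root filesystem.
sudo btrfs subvolume create /vm

# Disable copy-on-write on the (still empty) directory; files created
# inside it afterwards inherit the No_COW attribute. chattr +C has no
# effect on files that already contain data, so set it BEFORE copying
# any disk images in.
sudo chattr +C /vm

# Verify the attribute is set (a "C" should appear in the flags).
lsattr -d /vm
```

Note that No_COW also disables Btrfs checksumming for those files, which is the usual trade-off people accept for VM images.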

I’m not using any windows VM, so I don’t know for sure what would work.

If it were me, I would probably download the “clonezilla live” CD (or iso). Then I would boot the VMware system from that, and see if I can make a backup to an external drive. The next step would be to create a KVM machine with suitable parameters, and boot that to the “clonezilla live” iso. And then see if I can restore from the backup.

Converting disk image format is trivial. The real problem is potentially different underlying hardware.

Make sure you know the current configuration, in particular which devices are used to access the boot disk. You will need to use the same hardware in the definition of the QEMU VM. In the worst case you may need to migrate Windows to some standard adapter first, one common to VMware and QEMU. The same is likely true for video: reset to standard VGA until you have migrated and installed suitable QEMU drivers.

Check first whether your current disk format is supported by your new virtualization.
If it is, then copy the diskfile and try to create a virtual machine using it.

If the current disk format is not supported, or you just want to change to something else, then do a search for “convert … to …” and run the utility. If you’re not sure whether you found the right utility, post here and ask first. BTW, this is why I generally build my production machines to use the “raw” disk format: although it might perform a little worse than whatever the recommended format might be, it’s universally recognized by all virtualization platforms.
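As an example of such a conversion, converting a VMDK to raw with qemu-img (assuming the qemu tools are installed; the file names are placeholders):

```shell
# Inspect the source image first: format, virtual size, snapshots.
qemu-img info win10.vmdk

# Convert to raw; -p shows progress. The result is a plain
# sector-for-sector image that essentially any hypervisor can use.
qemu-img convert -p -O raw win10.vmdk win10.img
```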

Recommendations before converting…

  • Consolidate your diskfile into one file. For this reason I always create my diskfiles as a single file instead of multiple files, which is often the default (typically 2GB chunks, presumably for deploying on 32-bit file systems).
  • Optimize your working physical disks first; converting a highly fragmented file on an HDD can take much longer than on a disk where the files have been compacted and defragmented.
  • Prepare your diskfile by zeroing out the empty space in it, and compact it before converting.

Ordinarily, your old system should run just fine in the new virtualization; whatever the hypervisor, the virtualized hardware is well known and the drivers are embedded in the kernel nowadays. But if you do run into a problem, a number of imaging backup apps might fix it, as those apps are often built with the ability to restore to dissimilar hardware.


Thanks, it’s just a Windows 10 image that I was using under VMware on Arch, and it’s a single VMware virtual disk file. I’m just tired of the fact that every time a new kernel is installed, VMware may or may not be able to compile a new driver, so instead of all those headaches I’m going to try KVM with virt-manager as a replacement.

Once installed, ordinarily you shouldn’t need to update again (with certain exceptions).

  1. This is why Tumbleweed is not recommended as a production virtualization platform. And if home users want to avoid major changes all the time, Leap should be preferred over Tumbleweed.
  2. Verify that whatever app you’ve installed that requires kernel modules builds them with DKMS, a framework that ordinarily rebuilds kernel modules automatically when a new kernel is installed. Some apps like VirtualBox have recently started to include DKMS in their package dependencies or provide it as part of the installation, but ordinarily you have to add it yourself. Usually it’s as easy as searching for “dkms” and making sure the package is installed on your machine when you initially install or build your kernel modules.
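A quick way to check point 2 on an openSUSE host, sketched with the usual dkms commands:

```shell
# Confirm the dkms package is installed (openSUSE).
zypper search --installed-only dkms

# List the modules DKMS is tracking and their build state per kernel.
dkms status

# If a module didn't get rebuilt after a kernel update, this usually
# triggers a rebuild of everything DKMS knows about for the running kernel.
sudo dkms autoinstall
```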


AFAIK, QEMU will run the image without any conversion required… otherwise just use:

qemu-img convert -O qcow2 <image>.vmdk <image>.qcow2

I’d also ensure it’s on an XFS partition somewhere, and just use virt-manager to add it as a storage pool.
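The same storage pool can be set up from the command line with virsh; the pool name and path below are just examples:

```shell
# Define a directory-backed storage pool named "vmimages", create its
# target directory, start it, and have it start with libvirtd.
virsh pool-define-as vmimages dir --target /data/vmimages
virsh pool-build vmimages
virsh pool-start vmimages
virsh pool-autostart vmimages

# Check that it shows up.
virsh pool-list --all
```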

I have qemu here working a treat under Tumbleweed, never misses a beat after a kernel upgrade…

How does QEMU compare to VirtualBox for feature set, e.g., USB 3 support?

Since KVM support was merged into QEMU, for most purposes, including device I/O like USB, just running KVM-accelerated QEMU would be your best solution.
For special uses, primarily hardware platform compatibility, such as running a different architecture than the x86 family (ARM, SPARC, etc.), QEMU can provide a full emulation mode.

I don’t know if things have changed, but if you intend to manage with libvirt’s virt-manager, it requires a decision to manage all your machines the same way, i.e. you need to choose KVM or QEMU. I don’t know whether you can make exceptions by handcrafting the machine’s config file.

Best to nail down your requirements before exploring… unless you’re just curious and have the time to explore.


What user Tsu2 says :wink: I have special requirements: GPU and SATA pass-through (well, and USB…). I see no reason KVM and virt-manager won’t do what you need… Nothing stopping you from using both, e.g. to spin up a live USB image to try out…

Just reporting back… I followed the link in the original post I made and it worked fine. I had split files, so I used VMware tools to merge them into one vmdk file, then converted that into a qcow2 and imported it via virt-manager. It booted right up! The only issue I ran into was when I installed the virtio drivers as shown in the howto. The Fedora virtio drivers installed just fine, but changing the disk from SATA to virtio would cause a blue screen. The solution was to use bcdedit as follows, and it worked like a charm. Just reporting back for documentation purposes.

  1. Open an elevated command prompt and set the VM to boot into safe mode by typing
    bcdedit /set {current} safeboot minimal

  2. Shut down the VM and change the boot device type to virtio.

  3. Boot the VM. It will enter safe mode.
    Note: In safe mode all boot-start drivers will be enabled and loaded, including the virtio driver. Since there is now a miniport installed to use it, the kernel will make it part of the drivers to be loaded on boot and not disable it again.

  4. In the booted VM, reset the bcdedit settings to allow the machine to boot into normal mode by typing (in an elevated command prompt again):
    bcdedit /deletevalue {current} safeboot

  5. Done.
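For the host-side half of step 2 (switching the boot disk to the virtio bus), a sketch using virt-xml from the virt-install package; “win10” is a placeholder domain name:

```shell
# With the guest shut down, switch the first disk to the virtio bus.
virt-xml win10 --edit --disk bus=virtio

# Alternatively, edit the domain XML by hand and change
# <target ... bus='sata'/> to bus='virtio' (and the target dev
# from sdX to vdX):
#     virsh edit win10
```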

Your need to edit the Windows bootloader suggests that your Fedora does not boot from the same GRUB2 as your openSUSE.
Instead, your system appears to boot into GRUB2 (Tumbleweed), then chainloads into BCD (Windows), then into GRUB2 (Fedora).
If it’s working I wouldn’t advise modifying your boot sequence, but if you run into problems in the future, you could probably make an entry in your openSUSE GRUB2 to boot directly to Fedora.


Regarding the virtio drivers for a Windows Guest on KVM,
The Fedora ISO you seem to have used looks like a good option, and of course it can work, because installing Fedora-built virtio drivers in Windows has nothing to do with what is happening on the openSUSE HostOS, although there are a few things you can or should do (I’ll provide links below).

First, if someone doesn’t want to install virtio drivers from a Fedora build, the drivers can be cloned from GitHub (into your Windows).
The problem with the GitHub drivers is that they are unsigned (by Microsoft), which will throw up repeated warnings.
So I recommend using the Fedora ISO instead, which at least is signed by Red Hat…

On your HostOS, there are a number of ways you can expose virtio to the Guest…
For just the disk you can rely on the reference link in the @OP original post but there can be much more…
For common settings, search “virtio” in the following document

For a more extensive description of virtio settings, which includes everything in the above document (search “virtio” in the document)


Before I switched to SATA pass-through, the Fedora drivers worked fine for Windows; you just need to select them at install. I had a spare PCIe mini slot on my motherboard, so I used that for a 4-port SATA controller, and now Windows sits on a 60GB SSD :wink:

I’m not sure what you’re talking about; it’s a Windows 10 VM that was converted from VMware to KVM. VMware used the SATA driver, so that’s how Windows was installed, and now I wanted to use virtio on KVM in place of the SATA driver it was originally installed with. Anyway, there’s no chainloading anything; I’m booting a Windows VM from KVM.

I’m not sure what the issue is. When I migrated my Win10 VM from VMWare to VirtualBox, I kept the vmdk format. I initially kept the vmdk format in KVM and carefully matched the VM’s hardware characteristics to its VB setup. I had to combine all the snapshots and multi-parts (used the 2GB pieces format before) into one base disk file with a VB utility, but after that it all worked fine. I ended up converting the vmdk format to KVM’s qcow2 using qemu-img as described earlier in this thread. That went perfectly and that’s the VM disk that I’m using now. No boot issues along the way, other than KVM not reading the 2GB-each multi-part format from VB, which was easily fixed by combining them all together with the VB utility. Be sure to select the newest VB snapshot when combining.

One obvious difference is that the hard disk identification changes: different vendor, different product. I think that is enough to invalidate Windows activation. E.g. QEMU hard disks have “QEMU HARDDISK” as vendor/device. Did you also emulate the original HDD inquiry information?

Also the overall system information changes (different system vendor, BIOS version, ACPI tables, etc.). E.g. QEMU returns DMI: QEMU Standard PC, while on VMware it is obviously something entirely different.

Both the motherboard and the hard disk change will definitely result in an activation error.
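For the disk identity at least, QEMU’s ide-hd device exposes serial and model properties that can be overridden when launching QEMU directly; a rough sketch, with placeholder values you would copy from the original VMware disk (whether this is enough to keep Windows activation happy is another matter):

```shell
# Override what the guest sees for the disk's serial number and model
# string. The serial= and model= values here are placeholders.
qemu-system-x86_64 \
  -enable-kvm -m 4096 \
  -drive file=win10.qcow2,if=none,id=disk0,format=qcow2 \
  -device ide-hd,drive=disk0,serial=ABC123,model="VMware Virtual IDE Hard Drive"
```

With libvirt, the disk’s <serial> element under <disk> covers the serial-number part of this.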

I had two OEM-licensed Windows 10 Pro VirtualBox VMs. One started as a licensed Windows 7 VM, the other as a Windows 8 VM; both were upgraded through promotions at the time to Windows 10. I just converted both to KVM, and the licenses were reported as invalid, with the choice offered to buy activations or repair them. I chose repair, indicating that my hardware had changed, and when logged into my MS account I was able to select the appropriate existing license, so they are now licensed under KVM. I don’t know if it had any effect in the end, but when I set the KVM VMs up, I used my VB machine UUID for the KVM UUID.
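Carrying the UUID over can be sketched like this, assuming a VirtualBox VM named “Win10” and a libvirt domain named “win10” (both placeholders):

```shell
# Read the VirtualBox machine's UUID from its machine-readable info.
VBoxManage showvminfo "Win10" --machinereadable | grep '^UUID='

# Then place that value in the libvirt domain XML inside the
# <uuid>...</uuid> element (the guest must be redefined/shut down):
#     virsh edit win10
```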

I was surprised this reactivation worked, given that they were both oem licenses and not eligible for moving to new “hardware”.

Yes, the activation would be lost. I lost the activation on my VM when I moved from VMware to VirtualBox several years ago, so it wasn’t a factor going to KVM. I have moved several times since I initially installed Win10 in the VM, and I cannot find the original disk or its codes, so it is what it is. The VM works fine, so no worries. I’m not going to pay MS twice for the same thing because of their activation nonsense; I paid for it once and that’s good enough.