[Discussion thread, not issue related] YaST Virtualization options!

Hello, I’d like to learn about the Xen and QEMU (or whatever the other one is) virtualization options in openSUSE, and how to use them. They seem interesting, since everyone loves VMs and we can now run more than one system, or VM, on a single machine. That's a bit confusing to me if it’s a low-power system, but I’d understand it if it was a monster system with 32 threads, 64 GB of RAM, etc.

I saw something about booting the system with the virtualization tools, but I didn’t see that in the boot menu, so I have no idea how to use the options. I have openSUSE XFCE set up in VirtualBox so I can play with it.

There are entire books devoted to what you’ve asked; it’s not likely anyone can write a brief forum post that fully describes the features and differences between each choice, and you’re asking about 3 (or 4) different options… Xen, KVM, QEMU, and possibly Xen HVM (Xen’s full-virtualization stack, comparable to KVM’s use of QEMU).

Without doing more than scratching the surface…

Xen and KVM are somewhat comparable in that both are basically “paravirtualization” solutions that require and utilize CPU hardware extensions (Intel VT-x and AMD-V) to accelerate processing and management. The underlying architecture is quite different, though… Xen does it by running a privileged control domain (Dom0), while KVM does not. The KVM-style “Gen2” architecture is also used by practically every other paravirtualization solution in existence, including VMware, VirtualBox, Parallels, and Hyper-V.
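
If you want to verify that your CPU exposes those hardware extensions, a quick look at /proc/cpuinfo works on any Linux system. A minimal sketch in Python (the flag is "vmx" on Intel, "svm" on AMD):

```python
# Minimal sketch: check /proc/cpuinfo for the hardware virtualization flags
# that KVM and Xen HVM rely on (vmx = Intel VT-x, svm = AMD-V).
def has_hw_virt(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return bool({"vmx", "svm"} & set(flags))
    return False

if __name__ == "__main__":
    print("Hardware virtualization extensions present:", has_hw_virt())
```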

Although CPU and memory paravirtualization have been around for the better part of the last decade, paravirtualized device I/O is much newer and is still evolving. That probably also explains why practically every virtualization technology has diverged somewhat in its device I/O support; only in the most recent developments do you also find hardware support, and even there you can find divergences. But those differences largely amount to minute performance effects that aren’t likely to be noticeable on a single, small machine like the one you are likely setting up.
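
As a concrete illustration of paravirtualized device I/O: a KVM guest typically uses virtio drivers for its disk and network devices. A small sketch that just lists whichever virtio modules the currently running (guest) kernel has loaded:

```python
# Sketch: list loaded kernel modules whose names mention "virtio",
# i.e. the paravirtualized device drivers a KVM guest would typically use.
def loaded_virtio_modules(modules_path="/proc/modules"):
    with open(modules_path) as f:
        return [line.split()[0] for line in f if "virtio" in line.split()[0]]

if __name__ == "__main__":
    mods = loaded_virtio_modules()
    print("virtio modules loaded:", ", ".join(mods) if mods else "none")
```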

While Xen requires extensive kernel re-architecture to support Dom0, KVM is already part of the mainline kernel, so it can be run easily without kernel modification.
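
Because KVM is just a pair of kernel modules (kvm plus kvm_intel or kvm_amd), you can confirm it’s available on your running kernel without any special boot entry; a minimal sketch:

```python
# Sketch: confirm in-kernel KVM support is loaded on the running kernel.
# kvm_intel loads on Intel CPUs, kvm_amd on AMD CPUs.
import os

def kvm_ready():
    with open("/proc/modules") as f:
        names = {line.split()[0] for line in f}
    module_loaded = bool({"kvm_intel", "kvm_amd"} & names)
    # /dev/kvm appears once the module is loaded; it is what QEMU/libvirt open.
    return module_loaded and os.path.exists("/dev/kvm")

if __name__ == "__main__":
    print("KVM ready:", kvm_ready())
```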

QEMU has been around for the longest time (some parts might even pre-date paravirtualization) and has the unique capability to fully emulate practically any hardware in existence, both CPUs and I/O devices. So, for instance, this can be useful for ARM developers building for smartphones and tablets (plus more). Maybe you have an ancient piece of software written for an 80286 or even an 8008 processor. Maybe you have software that requires running on a SPARC or an Alpha. None of those can ordinarily be run on today’s x86/x64 machines, but QEMU can do that for you… at a price. The extra translation required to support non-native CPU instruction sets exacts a heavy performance penalty.
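
On openSUSE the full-emulation targets ship as separate QEMU packages, and each emulated architecture gets its own qemu-system-&lt;arch&gt; binary once installed. A small sketch that simply lists whichever of those binaries are present:

```python
# Sketch: list installed QEMU full-system emulators (one binary per
# emulated CPU architecture, e.g. qemu-system-aarch64, qemu-system-sparc).
import glob
import os

def installed_qemu_targets(dirs=("/usr/bin", "/usr/local/bin")):
    targets = set()
    for d in dirs:
        for path in glob.glob(os.path.join(d, "qemu-system-*")):
            targets.add(os.path.basename(path).replace("qemu-system-", ""))
    return sorted(targets)

if __name__ == "__main__":
    print("Emulated architectures available:", installed_qemu_targets())
```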

QEMU as a project is now tightly integrated with KVM: KVM natively runs a small <non-full emulation> part of QEMU for its device model, but you can also install the full emulation parts to support the processors I mentioned above.
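
To make that concrete: the same qemu-system-x86_64 binary runs hardware-accelerated when /dev/kvm is available, and falls back to pure software emulation (TCG) otherwise. A hedged sketch that builds the corresponding command line (the disk image name is hypothetical):

```python
# Sketch: choose KVM acceleration when /dev/kvm exists, otherwise fall back
# to QEMU's software emulation (TCG). The disk image name is hypothetical.
import os

def qemu_command(image="opensuse-test.qcow2", memory_mb=2048):
    accel = "kvm" if os.path.exists("/dev/kvm") else "tcg"
    return [
        "qemu-system-x86_64",
        "-accel", accel,          # kvm = hardware-assisted, tcg = pure emulation
        "-m", str(memory_mb),
        "-drive", f"file={image},format=qcow2",
    ]

if __name__ == "__main__":
    print(" ".join(qemu_command()))
```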

If you choose Xen instead of KVM, QEMU has also been added to Xen and is implemented as Xen HVM. My personal research and experimentation suggest that although promising, it’s not yet as stable and developed as QEMU running as a part and extension of KVM. Traditionally, “full emulation” had no hardware support and could only be done entirely in software, but this <might> be changing (I see no real motive for AMD and Intel to support non-x86 instruction sets, but plenty is now possible with modern modular CPU microcode that wasn’t possible 5 years ago).

Lastly, regarding booting:
It has everything to do with what I described above about what is built into the mainline kernel vs. what requires a complete re-architecture…

When you install and run Xen, a Xen kernel is offered at bootup alongside the normal kernels. Select it if you want to run Xen Guests.
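
If you’re not sure whether that entry is there, you can look for it in the generated GRUB configuration; a minimal sketch (the grub.cfg path is the usual openSUSE location, and reading it may require root):

```python
# Sketch: list GRUB menu entries and flag the ones that boot the Xen hypervisor.
# /boot/grub2/grub.cfg is the usual location on openSUSE; adjust if needed.
import re

def grub_menu_entries(cfg="/boot/grub2/grub.cfg"):
    with open(cfg) as f:
        return re.findall(r"menuentry '([^']+)'", f.read())

if __name__ == "__main__":
    for entry in grub_menu_entries():
        marker = "  <-- Xen" if "xen" in entry.lower() else ""
        print(entry + marker)
```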

When you install and run KVM, because it’s built into the mainline kernel, you don’t see a special kernel. You use your existing kernel and simply launch the user tools like any other application; typically these are the libvirt tools (virt-install, vm-install, virt-manager).
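
Those libvirt tools also have Python bindings (the libvirt-python package), so the same “just another application” point holds for scripts. A minimal sketch that lists the KVM/QEMU guests on the local machine (on a Xen Dom0 you would connect to "xen:///system" instead):

```python
# Sketch: use the libvirt Python bindings to list guests on the local
# KVM/QEMU hypervisor. Requires libvirt-python and a running libvirtd.
import libvirt

def list_guests(uri="qemu:///system"):
    conn = libvirt.open(uri)           # use "xen:///system" on a Xen Dom0
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():20s} {state}")
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()
```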

HTH,
TSU

Additionally,
You describe currently running your openSUSE in VirtualBox, not as your Host OS on bare metal.

Until relatively recently, this was theoretically possible but not practical: the multiple virtualization layers meant your “Guest running within a Guest” would be dog-slow.

But nowadays practically every virtualization technology supports giving the Guest more direct access to hardware, avoiding excessive virtualization layers, so although a slight penalty is still exacted, it’s now practical to at least experiment with Guests running within another Guest.

The procedure to implement this varies with each virtualization technology, but in VirtualBox (a command-line equivalent is sketched after these steps):

  1. Open the Guest’s Settings > System > Acceleration tab
  2. Check the box “Enable Nested Paging”
  3. In your case, since you’re running KVM or QEMU, also set the Paravirtualization Interface to “KVM”; leave it at Default if you’re running Xen.
  4. Click “OK” to save the settings.
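
The same settings can be applied from the host’s command line with VBoxManage while the VM is powered off; a hedged sketch (the VM name “openSUSE-XFCE” is hypothetical, substitute your own):

```python
# Sketch: apply the same VirtualBox settings from the host command line.
# The VM name is hypothetical; the VM must be powered off when you run this.
import subprocess

def configure_nested(vm_name="openSUSE-XFCE", paravirt="kvm"):
    # --nested-paging on   ~ the "Enable Nested Paging" checkbox
    # --paravirtprovider   ~ the Paravirtualization Interface drop-down
    subprocess.run(["VBoxManage", "modifyvm", vm_name,
                    "--nested-paging", "on",
                    "--paravirtprovider", paravirt], check=True)

if __name__ == "__main__":
    configure_nested()
```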

TSU