Nested Virtualization With ESXi as a KVM/QEMU Virtual Machine

So I have ESXi installed as a VM on KVM/QEMU and I have run into a problem. When setting up my VM I clicked COPY HOST CPU CONFIGURATION, but my VM cannot run another VM because virtualization is not enabled.

So I connected to ESXi using SSH and ran the following:

esxcfg-info | grep "\----\HV Support"

The output is:

|----HV Support............................................0

meaning that virtualization is not enabled for the KVM virtual machine. Does anybody know how to enable nested virtualization for guests on KVM/QEMU?

Hi
On the host system, is virtualization enabled in the BIOS? On the ESXi VM, is (or can) virtualization be enabled in the BIOS? Normally ESXi runs on bare metal…
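One quick way to check this from a shell on the KVM host, as a rough sketch:

```shell
# Check on the KVM host whether the kvm_intel/kvm_amd kernel module was
# loaded with nesting enabled. "Y" or "1" means guests can see VMX/SVM;
# "N" or "0" means the ESXi guest will report HV Support 0 regardless of
# what else you configure.
found=0
for mod in kvm_intel kvm_amd; do
    f=/sys/module/$mod/parameters/nested
    if [ -r "$f" ]; then
        printf '%s nested: %s\n' "$mod" "$(cat "$f")"
        found=1
    fi
done
[ "$found" -eq 1 ] || echo "no KVM module loaded"
```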

On the host it is enabled (in the BIOS); I have other VMs in KVM, including even Windows Server 2016. How do I enable it in ESXi?

Hi
Have a read here, YMMV;

This is not an ESXi problem, this is a KVM problem: none of the VMs I have get virtualization capabilities.

Hi
Your KVM setup for the ESXi VM needs to give it access to the CPU’s virtualization features etc., so this VM needs to be configured similarly to the link I pointed to, hence my YMMV comment.

What is the current KVM/virt-manager/virsh config for the ESXi host?
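If the guest is managed through libvirt, the CPU part of its definition can be dumped like this (a sketch; “ESXi” is a placeholder for the actual domain name):

```shell
# Show the <cpu> section of the guest definition -- this is where the
# host-passthrough / VMX feature settings live.
dom=ESXi
if command -v virsh >/dev/null 2>&1; then
    virsh dumpxml "$dom" | grep -A 5 '<cpu' || echo "domain $dom not found"
else
    echo "virsh not installed on this machine"
fi
```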

Like all other virtualization topics, any kind of nested virtualization is considered YMMV (experimental, not officially supported).
I’ve personally experimented on and off with various virtualization technologies on a whim, but have had only middling to no success in the past.

So, that should at least be a point of reference for others…

  • Keep your expectations in check.
  • Always look for the latest info; dates are important. For the most part, don’t treat anything published over a year ago as a reliable guide, but still read older articles for individual gems of technical importance.

The article Malcolm suggests is therefore a bit old to be relied on.
It doesn’t look entirely useless, but more recent references will describe things differently.

Skimming a number of articles on this subject (nested virtualization, specifically an ESXi guest on a KVM host), a few points stand out:

  • You have the option of implementing QEMU instead of KVM paravirtualization. QEMU is a related but different virtualization technology that was absorbed into other virtualization stacks like KVM and Xen a few years ago. QEMU “proper” has certain advantages in emulation over the technologies it’s been absorbed into, and <maybe> this is one situation where that might be helpful. The following writer made this work… I don’t know if the required features are already in openSUSE’s KVM/QEMU, but if not, the article describes how to build and enable them…

https://www.server-world.info/en/note?os=CentOS_7&p=kvm&f=12

  • The current official KVM reference for enabling nested virtualization. Short, but with important info you shouldn’t overlook:

https://www.linux-kvm.org/page/Nested_Guests

  • The following is old, so maybe none of it is still relevant. But the author did overcome several obstacles with the technologies of his time. It may be a useful reference if you run into anything similar:

https://rwmj.wordpress.com/2014/05/19/notes-on-getting-vmware-esxi-to-run-under-kvm/
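For reference, the linux-kvm.org page above boils down to reloading the KVM module with nested=1 and persisting that option; a sketch for an Intel host (use kvm_amd on AMD), run as root with all guests shut down:

```shell
opt='options kvm_intel nested=1'

# Reload the module with nesting on, if it is currently loaded:
if grep -qs '^kvm_intel' /proc/modules; then
    modprobe -r kvm_intel && modprobe kvm_intel nested=1 \
        || echo "reload failed (are guests still running?)"
fi

# Persist across reboots (only written when we have permission):
[ -w /etc/modprobe.d ] && printf '%s\n' "$opt" > /etc/modprobe.d/kvm-nested.conf

# Verify: should print Y or 1 once enabled
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
    || echo "kvm_intel not loaded"
```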

Also,
IMO it is notable that no article I skimmed, including the most recent ones using libvirt, mentioned that you’d also want to enable the setting in the Guest’s properties that turns on nested virtualization. Of course, this setting is probably ineffective if nested virtualization isn’t enabled at a lower level (see the KVM reference I provided you above).
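That guest-side switch can also be flipped from the command line; a sketch assuming a libvirt domain named “ESXi” (the name is a placeholder):

```shell
# Pass the host CPU, including its VMX/SVM virtualization flag, through
# to the guest. Takes effect on the guest's next cold boot.
cpu_mode=host-passthrough
if command -v virt-xml >/dev/null 2>&1; then
    virt-xml ESXi --edit --cpu "$cpu_mode" \
        || echo "edit failed -- is the ESXi domain defined?"
else
    echo "virt-xml not available; set <cpu mode='$cpu_mode'/> via 'virsh edit ESXi'"
fi
```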

Lastly,
In my own experiments I found that whenever the nested stack mixed hypervisors, there was further performance degradation. Because of that, my more recent experiments have focused on using the same virtualization technology at every level. Just the ability to mix different hypervisors with any prospect of working at all is probably relatively recent, within the past 2-3 years.

On a slight tangent,
I also did a search to see if anyone has built ESXi into a Linux container…
Seems to me that the prospects would be good, and it would avoid all this mess involving nested virtualization by removing one of the virtualization layers (anything running in a container is running on bare metal).
Surprised that I didn’t get any search hits. Either people have tried it and failed without saying anything, or I’m just lousy at keyword selection.
This is how I currently run “nested” applications, but in reverse of what you’re trying to do… Instead of running virtualization in a container, I run containers in my Guests, and this works for every virtualization technology… And there is plenty about VMware building out this approach.

Good Luck,
TSU

Thank you @tsu2, here is what worked:

virt-install \
--name ESXi \
--ram 16384 \
--disk /data/vms/VMWare/VMWare.qcow2,size=100 \
--cpu host-passthrough \
--vcpus=8 \
--os-type linux \
--os-variant=virtio26 \
--network bridge=virbr0,model=e1000 \
--cdrom /home/abuser/Downloads/VMware.iso \
--features kvm_hidden=on \
--machine q35

One question not related to ESXi directly: how can I improve the performance of VMs in KVM? When I open a VM, only one CPU thread is used to the maximum, and I have 4 cores with 2 threads on each core :(. Any suggestions?

For anyone who feels they have grasped the basics of their virtualization technology and is looking for more advanced techniques, I can recommend the SLES documentation. Keep in mind the criticisms I’ve made of it in the past, which primarily concern incorrect terminology and basic concepts, likely caused by authors with a strong Xen background believing they can apply the same to KVM. The SLES documentation is a living document, though, so it’s possible that many, most, or all of the criticisms I’ve made have since been addressed, or none at all. Regardless of the problems that have existed, there are a great many gems of info in the documentation that might be overlooked elsewhere, so it’s a worthwhile read for those who have a strong enough foundation not to accidentally learn things incorrectly.

Here is the current SLES documentation:
https://www.suse.com/documentation/sles-12/singlehtml/book_virt/book_virt.html

Regarding optimization: although KVM’s own article is very short, each item described is well worth considering:
http://www.linux-kvm.org/page/Tuning_KVM
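To make that page concrete: for ordinary Linux guests, the biggest single win it lists is paravirtualized virtio disk and network devices (ESXi itself ships no virtio drivers, which is why the e1000 model is used in the command above). A sketch of a tuned Linux guest install; the guest name and paths here are made up:

```shell
# Hypothetical tuned Linux guest: virtio disk/net, host CPU passthrough,
# and cache=none on the disk, per the Tuning_KVM page. Name and paths
# are placeholders.
disk_bus=virtio
if command -v virt-install >/dev/null 2>&1; then
    virt-install \
        --name tuned-linux \
        --ram 4096 \
        --vcpus 4 \
        --cpu host-passthrough \
        --disk /var/lib/libvirt/images/tuned-linux.qcow2,size=20,bus=$disk_bus,cache=none \
        --network bridge=virbr0,model=virtio \
        --os-variant generic \
        --cdrom /home/abuser/Downloads/linux.iso \
        || echo "install failed (paths above are placeholders)"
else
    echo "virt-install not present; flags shown for reference only"
fi
```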

Red Hat always has excellent guides for anything Enterprise; their article applies to deployments on openSUSE as well, though of course we use our own tools (e.g. zypper vs. yum):
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html-single/virtualization_tuning_and_optimization_guide/index

If you’re familiar with the pros and cons of CPU pinning, you can experiment. But IMO, unless you have a specific machine running specific highest-priority workloads, it probably shouldn’t be a high priority:
https://www.stackhpc.com/tripleo-numa-vcpu-pinning.html
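If you do try it, pinning is one virsh call per vCPU; a sketch, again with “ESXi” standing in for the domain name (check the host topology with lscpu -e first):

```shell
# Pin each of the guest's vCPUs to one fixed host thread so the host
# scheduler stops migrating them. This example maps vCPU N to host CPU N;
# the domain name and the CPU numbers are placeholders.
vcpus="0 1 2 3"
if command -v virsh >/dev/null 2>&1; then
    for v in $vcpus; do
        virsh vcpupin ESXi "$v" "$v" || echo "pinning vCPU $v failed"
    done
else
    echo "virsh not installed"
fi
```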

There are of course other resources and practices I haven’t described…

Glad to hear you have your virtualized ESXi running, and thanks for posting your configuration; it’ll be a big help for others who follow,
TSU

Thanks @tsu2, it is much better now; the RH optimization guide helped. I don’t know if I should ask a new question or can just ask here: is the NVIDIA driver working with the Xen kernel now?

Better to start a new thread when you have a question that’s not in keeping with the original Q in this thread…

TSU