lxc can't launch openSUSE 15.2 VM

I have lxd + lxc + qemu + libvirt installed on an openSUSE 15.2 system, and I can launch containers from lxc, but I can't launch VMs. I'm trying with the openSUSE 15.2 cloud amd64 images.

lxc is complaining that it can't find /usr/share/qemu/OVMF_VARS.ms.fd (it's looking in /usr/share/qemu because I added Environment="LXD_OVMF_PATH=/usr/share/qemu/" to the [Service] section of lxd.service - before that it was looking in /usr/share/OVMF, which doesn't exist).
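
For reference, the change amounts to a drop-in like the following (sketching from memory), followed by a reload and restart:

sudo systemctl edit lxd.service
# drop-in contents:
# [Service]
# Environment="LXD_OVMF_PATH=/usr/share/qemu/"
sudo systemctl daemon-reload
sudo systemctl restart lxd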

I have no such file anywhere in my system.
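
(Checked with something like the following - the qemu-ovmf-x86_64 package ships firmware images, but nothing with that exact name:)

sudo find / -xdev -name '*OVMF*' 2>/dev/null
rpm -ql qemu-ovmf-x86_64 | grep -i -E 'code|vars'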

I’m sure I’m missing something simple. Can anyone enlighten me?

Thanks
David

uname -a

Linux server 5.3.18-lp152.60-default #1 SMP Tue Jan 12 23:10:31 UTC 2021 (9898712) x86_64 x86_64 x86_64 GNU/Linux
sudo lxc --verbose launch --vm images:opensuse/15.2/cloud/amd64 oS-152-vm

Creating oS-152-vm
Starting oS-152-vm
Error: lstat /usr/share/qemu/OVMF_VARS.ms.fd: no such file or directory
Try `lxc info --show-log local:oS-152-vm` for more info
lxc info --show-log local:oS-152-vm


Name: oS-152-vm
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/01/24 21:18 UTC
Status: Stopped
Type: virtual-machine
Profiles: default
Error: open /var/log/lxd/oS-152-vm/qemu.log: no such file or directory
rpm -qa | grep -E 'lx[cd]|qemu|libvirt' | grep -Ev lxde | sort

liblxc1-4.0.0-lp152.2.22.x86_64
libvirt-6.0.0-lp152.9.6.2.x86_64
libvirt-bash-completion-6.0.0-lp152.9.6.2.noarch
libvirt-client-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-config-network-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-config-nwfilter-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-interface-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-libxl-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-lxc-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-network-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-nodedev-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-nwfilter-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-qemu-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-secret-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-core-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-disk-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-gluster-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-iscsi-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-logical-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-mpath-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-rbd-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-scsi-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-qemu-6.0.0-lp152.9.6.2.x86_64
libvirt-libs-6.0.0-lp152.9.6.2.x86_64
lxc-4.0.0-lp152.2.22.x86_64
lxc-bash-completion-4.0.0-lp152.2.22.noarch
lxcc-0.1.0+svn733-lp152.3.2.noarch
lxcfs-4.0.1-lp152.1.2.x86_64
lxcfs-hooks-lxc-4.0.1-lp152.1.2.noarch
lxd-4.10-lp152.2.21.1.x86_64
lxd-bash-completion-4.10-lp152.2.21.1.noarch
qemu-4.2.1-lp152.9.6.1.x86_64
qemu-block-curl-4.2.1-lp152.9.6.1.x86_64
qemu-ipxe-1.0.0+-lp152.9.6.1.noarch
qemu-ksm-4.2.1-lp152.9.6.1.x86_64
qemu-kvm-4.2.1-lp152.9.6.1.x86_64
qemu-linux-user-4.2.1-lp152.9.6.1.x86_64
qemu-microvm-4.2.1-lp152.9.6.1.noarch
qemu-ovmf-x86_64-201911-lp152.6.8.1.noarch
qemu-seabios-1.12.1+-lp152.9.6.1.noarch
qemu-sgabios-8-lp152.9.6.1.noarch
qemu-tools-4.2.1-lp152.9.6.1.x86_64
qemu-ui-curses-4.2.1-lp152.9.6.1.x86_64
qemu-ui-gtk-4.2.1-lp152.9.6.1.x86_64
qemu-ui-sdl-4.2.1-lp152.9.6.1.x86_64
qemu-ui-spice-app-4.2.1-lp152.9.6.1.x86_64
qemu-vgabios-1.12.1+-lp152.9.6.1.noarch
qemu-x86-4.2.1-lp152.9.6.1.x86_64

It's been a while since I've looked at LXC, but I assume nothing has changed...

The first question, as always, is whether you installed LXC using the YaST virtualization module.
And if you did, the next question would be why you're not using Virtual Machine Manager to launch your LXC containers.

If this is your first time using LXC with libvirt,
you should know that there are some major differences between LXC on its own and LXC with libvirt.
The following is the openSUSE documentation on LXC with libvirt:

https://doc.opensuse.org/documentation/leap/virtualization/html/book-virt/cha-lxc.html

TSU

My apologies, I should have been clearer about the history (in zypper terms, roughly the sequence sketched after the list):

  1. installed lxd:
  • lxc pulled in as a requirement
  • can launch containers
  • can't launch VMs
  2. added qemu, as lxc requires this for VMs
  • still can't launch VMs - now complaining about ovmf
  3. added ovmf
  • lxc / qemu still can't find the ovmf files
  4. added libvirt, as there were complaints about virtiofs missing
  • I think this was unnecessary as nothing changed
  5. added LXD_OVMF_PATH=/usr/share/qemu
  • still can't find the ovmf files …

No, I didn’t use the YaST2 Virtualisation module. Everything from the command line as this is a trial and I’d like to understand the required software stack.

:-(
David

OK,
that's a lot clearer.
AFAIK LXD is incompatible with libvirt; LXD and libvirt are different, mutually exclusive ways of managing containers.

After installing LXD from the openSUSE repos (I assume you did that as the easiest way, better than using a SNAP or building from source as described in the official linuxcontainers documentation),
continue setting up and launching your containers from the following link:

https://linuxcontainers.org/lxd/getting-started-cli/#initial-configuration
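
If I remember right, the initial configuration boils down to something like this (a minimal sketch; the interactive lxd init walks through the same storage and networking choices):

sudo lxd init --auto
lxc launch images:opensuse/15.2 test-container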

Specifically regarding the use of LXC with virtual machines, there is additional required configuration:

https://linuxcontainers.org/lxd/advanced-guide/#difference-between-containers-and-virtual-machines

Your specific error about OVMF likely relates to how you obtained your image.
OVMF is "Open Virtual Machine Firmware", the TianoCore UEFI firmware build QEMU uses to boot VMs in UEFI mode (not to be confused with OVF, the "Open Virtualization Format" used for porting virtual machines between hypervisors).
Note the required server setting for the OVMF path:
https://linuxcontainers.org/lxd/docs/master/environment
I haven't looked at LXD's OVMF support closely, but generally speaking the firmware is provided by the virtualization technology (QEMU in this case), while the guest image supplies whatever boot files it needs to start in that environment.
So, IMO the big question at this point is the source of your VM image... Is it built to boot under UEFI, or is it something else?

TSU

After some thought,
the following is likely your critical error: is the OVMF file named in your log mis-named or missing?

/usr/share/qemu/OVMF_VARS.ms.fd

TSU

Thanks @tsu2. I'll have a clean-out of the installed packages and try again.

The images used are the openSUSE images downloaded from the linuxcontainers.org image set - the opensuse/15.2/cloud/amd64 image.

Regarding the error & OVMF, I think qemu is looking for the UEFI firmware ("BIOS emulation") files, and (from a 5-second search for the error message) lxc uses hard-coded file names 'OVMF.ms.fd' and 'OVMF_VARS.ms.fd'. I haven't been able to find an option or config item which affects this, apart from the LXD_OVMF_PATH env variable.
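
If those names really are hard-coded, one workaround I may try is symlinking them to whatever openSUSE actually ships (the .bin names below are my guess - I'd list the package contents first):

rpm -ql qemu-ovmf-x86_64 | grep -i -E 'code|vars'
sudo ln -s /usr/share/qemu/ovmf-x86_64-ms-code.bin /usr/share/qemu/OVMF.ms.fd
sudo ln -s /usr/share/qemu/ovmf-x86_64-ms-vars.bin /usr/share/qemu/OVMF_VARS.ms.fd
sudo systemctl restart lxd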

I have found a repo which says it provides these files (https://build.opensuse.org/project/show/home:jejb1:UEFI & http://download.opensuse.org/repositories/home:/jejb1:/UEFI/). It says it provides secure-boot code (which I don't want - I only want UEFI), but I haven't investigated what it builds yet.
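
If I do try that repo, it would be something like this (the subdirectory is guessed from the usual OBS layout - I'd check the download URL first):

sudo zypper addrepo http://download.opensuse.org/repositories/home:/jejb1:/UEFI/openSUSE_Leap_15.2/ jejb1-uefi
sudo zypper refresh
zypper search -s ovmf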

I’ll try following their linuxcontainers instructions next.

Yours
David

Yes, all from openSUSE repos.

I dislike the idea of using SNAP within a working system, having experienced the pain of tracing unusual behaviour when virtual environments use a mixture of software from the OS's software set and the virtual env's software set. That's part of the reason for going the container route: creating dedicated, consistent, stable environments for specific services (package / software sets).

I’ve only run LXD containers directly, and not in a VM.
Piecing together my understanding of containers and QEMU,
I find it unique and odd that someone would package the containers separately from the VM BIOS files, in fact I’m not aware of anybody doing that with any virtualization technology… Typically the BIOS/UEFI environment is set and provided by the virtualization technology, and the Guest image includes the typical bootup files needed to boot in that environment.
In fact, as I was writing this observation, I paused and looked deeper into LXC vm images…
And more or less confirmed my suspicion…

  • VM images are specially constructed and configured compared to "container images."
  • There are numerous suggested ways to build VM images using various tools, which are not the same as the tools used to build ordinary container images.
  • There are a few LXD public repos which are supposed to already contain working VM (and possibly container) images.
  • There is a particular tool which is supposed to set your VM environment variables automatically, but if the image was built with another tool, you might have to do almost anything to make things work. Therefore, it may be best to use images from one of the recommended repos, which build images with as few fixed settings as possible.

Read the links I posted, plus, as needed, the more in-depth documentation links, depending on what you are trying to set up.
So, for instance, if you are already running into a problem launching an image, I'd generally recommend trying another image repo before troubleshooting why the image isn't booting properly.
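
For example, assuming the stock ubuntu: remote is configured, something like:

lxc launch ubuntu:20.04 vmtest --vm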

TSU

Thanks @tsu2

I found this comment where Stéphane Graber (who should know) says it’s a problem with /dev/kvm.

I installed the kvm_tools pattern (which pulled in loads of stuff) and rebooted ... and am still in the same situation: /dev/kvm exists, containers work, and VMs don't.
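
(Checked roughly like this:)

ls -l /dev/kvm              # device node exists
lsmod | grep -E '^kvm'      # kvm + kvm_intel / kvm_amd loaded
lxc info | grep -i driver   # does LXD report qemu as an available driver?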

On Ubuntu 20.04, LXD worked out of the box for both containers and VMs, admittedly with a SNAP install.

If I knew what to compare or copy over, I'd do so, but I've spent too much time on this already. Time to switch this box to Ubuntu, unfortunately.

Thanks again for your time,
David

Aside from the fact you're referring to a thread that's a year old (if there were a show-stopper error, you'd think it would have a high enough priority to be fixed within a year),
SGraber references the same issue I speculated about... The problem is likely the way your image is created, not the LXD application itself.
Several times, in response to different users, he references faulty use, or lack of use, of cloud-init.

I’m suggesting you’re likely making the same mistake.
I don’t know where your images are coming from, but they’re likely faulty… And I suggested that the likely better solution is to use a different source of images instead of trying to patch the images you’re using.

TSU

Hi Tsu,

the images I used came from the https://images.linuxcontainers.org/ site (using the normal lxc launch images:opensuse/15.2/cloud command) and the official Ubuntu images (using the lxc launch images:ubuntu/18.04/cloud command). These are the same source and images I used for the successful trial on Ubuntu 20.04.

I added no cloud-init or other config to any of the 4 launches, apart from the inclusion / omission of --vm. Both installs used a btrfs filesystem on an LVM LV as their default storage.

These are, I assume, standard image builds with all the necessary bits in the right places ... I haven't looked and wouldn't know how to check. I also assume that the openSUSE and Ubuntu LXD installs fetch these images the same way from the same location.

Logged as bug 1181549

It looks like Canonical moved the location of the OVMF images ... and there is at least one other issue as well.

I'm wondering if the files are UEFI firmware files...
If so,
perhaps specifying the path in the hypervisor's advanced configuration settings fixes the problem.

e.g.
In the following article, skip down to the Advanced Hypervisor Configuration section; there is a setting there to specify the firmware files:

https://ostechnix.com/enable-uefi-support-for-kvm-virtual-machines-in-linux/
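
In libvirt terms (not LXD), the equivalent sanity check would be something like the following - if this boots, the firmware files themselves are usable (the ISO path is just a placeholder):

sudo virt-install --name uefi-test --memory 1024 --disk size=5 \
  --boot uefi --cdrom /path/to/installer.iso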

TSU