Unable to boot system after starting a QEMU-KVM virtual machine

I have a weird problem with QEMU-KVM that I have never had before.

So I have an HP ProLiant server that runs openSUSE Tumbleweed and acts as a QEMU-KVM virtualization host. The server can create and run virtual machines perfectly (created remotely with virt-manager). However, once virtual machines exist and the host server is rebooted, a device under /dev/disk/by-id (I don’t remember the exact name, and I don’t want to recreate the problem and have to reinstall everything) hangs the entire system for 90 seconds and then drops it into rescue mode.

When no virtual machines have been created on the system, the server can be rebooted flawlessly a million times without any errors. But once a single virtual machine has been created and the server goes down and comes back up, the system only boots to rescue mode. The only theory I have is that it’s the RAID controller, because the device that keeps the system from booting is a /dev/disk/by-id entry with a really long name that is hard to remember, and the whole OS is installed on a RAID controller. There are only three partitions: the root (/) as ext4, a swap partition, and a Btrfs home partition (/home).

And like I said, the issue only occurs when QEMU-KVM is active with a virtual machine. Any help with this issue would be appreciated, because I have researched it and have not come up with what I need.

And BTW: the HP server has the issue, but I have a Dell server that is set up exactly the same way, and the Dell server boots fine with no issues. So maybe it’s a misconfiguration?

What guest do you run?

Maybe a bad setting in /etc/fstab of the guest, if it’s Linux.

The guest VMs are Windows Server 2008 R2, 2012 R2, and 2016, plus a Xubuntu machine running a server.
The unfortunate issue I have is that the host running openSUSE won’t boot after a reboot or shutdown when virtual machines have been created or are active.

Well, a /dev/disk/by-id error indicates that somewhere a Linux OS is looking for a partition that may no longer be available. So check ALL the /etc/fstab files in use… Best to try to capture the exact error, since we can’t look over your shoulder.
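As a starting point, util-linux ships a dry-run checker for fstab; this is just a sketch of how I’d sanity-check the entries without changing anything:

```shell
# Dry-run validation of /etc/fstab: reports missing devices, unknown
# filesystems, and malformed options without actually mounting anything
findmnt --verify --fstab

# List which block devices the by-id/by-uuid symlinks currently resolve
# to; an entry referenced in fstab but absent here will stall the boot
ls -l /dev/disk/by-id/ /dev/disk/by-uuid/
```

Run it on the host (and inside any Linux guest) and compare against what each fstab actually references.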

Need more info.

A guest causing a host OS problem like failing to boot is highly unusual. Is your guest configured to start on boot? If so, you might take a look at timing… whether the guest is launching before all its dependencies are met, particularly mount points.
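If libvirt is managing the guests (virt-manager uses it under the hood), virsh can show which domains are flagged to autostart, and temporarily disabling that is a cheap test. The domain name “win2016” below is just a placeholder:

```shell
# Domains libvirtd will launch automatically at host boot
virsh list --all --autostart

# Disable autostart for one guest, reboot the host, and see whether the
# hang goes away (re-enable afterwards with: virsh autostart win2016)
virsh autostart --disable win2016
```

If the boot succeeds with autostart off, that points at an ordering/dependency problem rather than the guest itself.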

Speaking of mount points,
As gogalthorp describes, perhaps the most common cause of a system not booting the way you describe is a broken mount point. You should consider everything related to mount points, including the suggestion to inspect the fstab file. Especially in enterprise setups, mount points might point to different physical disks for performance, isolation, and management purposes. If, for instance, you’re deploying OpenStack or something similar, you might also be launching instances from a common image but storing machine-specific differences in a different volume.

Keep in mind that the fstab is only a starting point. It’s the most common place to describe mounts during system boot, but Linux today supports numerous ways to mount, and in every case you need to make sure that dependencies (like a subsystem, network functionality, etc) are met.
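On a systemd-based distro like openSUSE you can see every mount systemd knows about, whatever its source (fstab, mount units, automounts), and what each one depends on. A sketch:

```shell
# All mount and automount units, active or not, from any source
systemctl list-units --type=mount --type=automount --all

# Dependency chain for one mount point (example: /home)
systemctl list-dependencies home.mount
```

A unit stuck in a “failed” or “waiting” state here is often the one behind a 90-second boot timeout.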

Aside from understanding and analyzing your virtualization and physical setup from a management perspective, you should, as always, also start collecting concrete information about your error. In openSUSE nowadays, that begins (and might end) with inspecting your boot log. The following example displays the boot log from the previous boot, in reverse order (last entry first):

journalctl -rb -1
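Building on that, a couple of variations narrow the output down to the failure itself (a 90-second hang is typically a device or mount unit timing out):

```shell
# Only messages of priority "err" and worse from the previous boot
journalctl -b -1 -p err

# Just the device/mount unit activity (journalctl accepts shell-style
# globs for unit names)
journalctl -b -1 -u '*.device' -u '*.mount'
```

Posting the lines around the long /dev/disk/by-id name from that output would let others diagnose this concretely.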


OK, here is what I have.

So, with two servers that function the same, the Dell server that uses QEMU on RAID does not have a resume line in the kernel parameters in the YaST boot loader tool. But the server that has the boot issues does have a resume line in its parameters.
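To compare what each machine actually booted with, /proc/cmdline shows the live kernel parameters; the resume= entry, if present, names the device used for suspend-to-disk, which is my reading of the line you’re describing:

```shell
# Kernel parameters the running system was booted with
cat /proc/cmdline

# Extract just the resume= entry, if any
grep -o 'resume=[^ ]*' /proc/cmdline || echo "no resume parameter"
```

If the resume= target is one of those long /dev/disk/by-id names and it doesn’t resolve at boot, that alone can produce the timeout you’re seeing.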

The second thing I have is the fstab. The original fstab uses UUIDs in the mount entries, as follows:

UUID=e678ece9-1ce9-4439-956c-b681366d1941  /      ext4   acl,user_xattr  0  0
UUID=803cad09-86b4-4986-8190-1257568d14c9  /home  btrfs  defaults        0  0
# UUID=DC7E24F67E24CB5A                    /disk  ntfs   defaults        0  0
UUID=96e95a65-a914-479e-b92c-4526648375d1  swap   swap   defaults        0  0

So I edited the file (after making a backup beforehand) to use device paths instead:

/dev/sda1  /      ext4   acl,user_xattr  0  0
/dev/sda3  /home  btrfs  defaults        0  0
# UUID=DC7E24F67E24CB5A                    /disk  ntfs   defaults        0  0
/dev/sda2  swap   swap   defaults        0  0
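Before settling on the /dev/sdX names, it may be worth confirming they point at the same partitions as the original UUIDs, since sdX letters can change between boots (which is why UUIDs are the installer default). A sketch, using the root UUID from the fstab above:

```shell
# Map every known filesystem UUID to its device node
ls -l /dev/disk/by-uuid/

# Resolve one UUID directly (the root filesystem from the fstab above)
blkid -U e678ece9-1ce9-4439-956c-b681366d1941
```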

In addition, the server I have at home boots its guest virtual machines when the system starts.

This means that you somehow changed the mount settings, or that some program/script changed the fstab. Both versions should work if they’re from the same machine/hardware.

What has been posted might only be partial information.

What are the physical storage subsystems of the machine? E.g., are all the drives internal SATA? Any RAID arrays, and if so, are there more than one? A combination of drive subsystems, perhaps? Are there any iSCSI, SAN, or network shares, or other remote/external drive subsystems?
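For reference, a couple of commands that summarize the drive topology in one shot; this is only a sketch of how I’d gather that information:

```shell
# Block devices with type, size, transport (sata/sas/usb/...) and model
lsblk -o NAME,TYPE,SIZE,TRAN,MODEL

# Linux software-RAID status, if mdraid is in use at all
cat /proc/mdstat 2>/dev/null || echo "no mdraid arrays"
```

Posting that output for both the HP and the Dell would make it easy to spot where the two setups actually differ.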