Boot / LVM issues after upgrade from Leap 42.3

Hello all!

Note: I already posted this issue in the German forum, but it was recommended that I re-post it in the English forum to seek further assistance.

After an upgrade from Leap 42.3 to Leap 15, which completed without any serious error messages, the system fails to boot because the root partition (which is on an LVM LV) cannot be found. Further checking revealed that, for some reason, the PVs that make up the VG containing the LV appear twice, and the LV (“VG_root/root”) is not activated:

lvm> vgscan
Reading all physical volumes.  This may take a while...
WARNING: PV Xfqqkk-pu88-OsIP-TRz7-fjXi-Z9l0-T3dzII on /dev/mapper/eui.0025385571b12544-part2 was already found on /dev/nvme0n1p2.
WARNING: PV qxDlOs-6KJ1-HbKP-eIjK-7Gwb-qGzc-wnqsjx on /dev/mapper/eui.0025385571b12544-part3 was already found on /dev/nvme0n1p3.
WARNING: PV GRmNZo-142Z-sJMx-LUS9-wPe7-fsmF-G2PBVx on /dev/mapper/eui.0025385571b12544-part4 was already found on /dev/nvme0n1p4.
WARNING: PV Xfqqkk-pu88-OsIP-TRz7-fjXi-Z9l0-T3dzII prefers device /dev/mapper/eui.0025385571b12544-part2 because of previous preference.
WARNING: PV qxDlOs-6KJ1-HbKP-eIjK-7Gwb-qGzc-wnqsjx prefers device /dev/mapper/eui.0025385571b12544-part3 because of previous preference.
WARNING: PV GRmNZo-142Z-sJMx-LUS9-wPe7-fsmF-G2PBVx prefers device /dev/mapper/eui.0025385571b12544-part4 because of previous preference.
Found volume group "VG_root" using metadata type lvm2
Found volume group "VG_data" using metadata type lvm2
lvm> lvscan
inactive          '/dev/VG_root/root' [<100.00 GiB] inherit
inactive          '/dev/VG_root/home' [100.00 GiB] inherit
ACTIVE            '/dev/VG_data/srv' [16.77 GiB] inherit
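
For what it's worth, LVM chooses between such duplicate paths via its device filter, so the warnings themselves can be avoided by rejecting the dm-mapped duplicates in /etc/lvm/lvm.conf. A sketch with the device names from the output above (whether this addresses the root cause is another question):

devices {
    # reject the /dev/mapper/eui.* duplicates, accept everything else
    global_filter = [ "r|^/dev/mapper/eui\..*|", "a|.*|" ]
}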

I installed a small Leap 15 test installation on a free partition, /dev/nvme0n1p5, which boots correctly. I rebuilt the initrd/initramfs, but that did not help.
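For reference, the rebuild was done roughly like this from the test system (the mount points and the single installed kernel are assumptions):

# activate the VG and chroot into the broken installation
vgchange -ay VG_root
mount /dev/VG_root/root /mnt
for d in proc sys dev; do mount --bind /$d /mnt/$d; done
KVER=$(ls /mnt/lib/modules)          # assumes exactly one kernel is installed
chroot /mnt dracut --force /boot/initrd-$KVER $KVER
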
Any ideas what went wrong?

Regards, Sven

Well, I have fixed it :), yet I still don’t know why the fix worked… just in case it is of interest:
The issue was not with LVM, but with the kernel modules “nvme_core” and “nvme”. They were included in the initramfs via:

force_drivers+="nvme_core nvme"

When I removed this line and rebuilt the initramfs using dracut, the system booted.
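
Concretely, the steps were roughly these (the location of the line is an assumption; dracut reads /etc/dracut.conf and the drop-ins under /etc/dracut.conf.d/):

# find the offending line
grep -rn force_drivers /etc/dracut.conf /etc/dracut.conf.d/
# remove or comment it out in the file found above, then rebuild
dracut --force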

However, what I do not understand is why these modules caused the trouble: once the system is running, these modules are loaded anyway (checked with lsmod). So, if anyone knows the reason, please let me know, just out of interest.
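
For reference, the check was simply:

# modules loaded on the running system
lsmod | grep nvme
# and what dracut packed into the initramfs of the running kernel
lsinitrd | grep -i nvme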

Regards, Sven

Purely speculating, but it could be that the nvme modules were incompatible with the kernel at the time the initramfs was built.
Nowadays the modular Linux kernel can load most kernel modules at any time, so maybe a later stage in the boot flow checked the dependencies and loaded the modules at that point.

Purely speculating…
TSU

Hi TSU, those had been my thoughts too, so the first time around I recreated the initramfs without removing those modules, but without success. So they should have been compatible.
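
If anyone wants to rule out such a version mismatch themselves: the vermagic of the modules on disk has to match the kernel the initramfs is built for, which is quick to check:

# vermagic of the on-disk modules vs. the running kernel
modinfo -F vermagic nvme nvme_core
uname -r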