I actually saw that bug report, but forgot about it…
Let’s hope you get a reply now.
Personally I don’t really have another idea currently, except trying to run mkinitrd on the broken system.
But that may be difficult if you cannot boot the system in the first place.
I suppose it should be possible to boot from a LiveCD or similar and then switch to the installed system with chroot (after mounting the LVM of course) to do that. I cannot give exact instructions for your setup, but a rough sketch follows.
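Something like this, with the device and volume names as placeholders you would have to adapt:

    cryptsetup open /dev/nvme0n1p2 cr_system   # unlock the LUKS container
    vgchange -ay                               # activate the LVM volume group(s)
    mount /dev/mapper/system-root /mnt         # mount the installed root
    mount /dev/mapper/system-boot /mnt/boot
    for d in proc sys dev; do mount --bind /$d /mnt/$d; done
    chroot /mnt
    mkinitrd                                   # rebuild the initrd inside the chroot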
Since 20181030 was available, I went ahead, removed the locks and upgraded. It broke the system again. I rebooted into maintenance mode and tried mkinitrd; no change. I’ll keep trying a little longer (days, maybe a week or so), but I don’t want to keep those packages held back too long. I may end up simply downloading the latest snapshot and reinstalling.
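For reference, the lock handling is just the standard zypper commands:

    zypper ll       # list the current package locks
    zypper rl <nr>  # remove a lock ("removelock"), by number or package name
    zypper dup      # distribution upgrade to the new snapshot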
I have been having similar issues with Tumbleweed upgrades for a few days now.
Similar setup: Dell Precision laptop (similar to the XPS 15), Kaby Lake CPU
encrypted /boot (ext2)
encrypted / (ext4)
encrypted /home (ext4)
all within an LVM on a LUKS partition.
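So the stack, with hypothetical device and volume names, is roughly:

    nvme0n1                      disk
    └─nvme0n1p2                  part   LUKS container
      └─cr_system                crypt  LVM physical volume
        ├─system-boot            lvm    ext2   /boot
        ├─system-root            lvm    ext4   /
        └─system-home            lvm    ext4   /home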
When I upgraded to 20180128 or 20180126 (I am not sure which), which came with the 4.14.15 kernel and a big batch of other upgrades (maybe 1.5 GB in total), I was no longer able to boot with the new kernel.
With .15 there is no password prompt for the encrypted root partition and it does not get mounted. After some delay I get a dracut emergency console.
The kept kernel 4.14.12 (4.14.12-1-default #1 SMP PREEMPT Fri Jan 5 18:15:55 UTC 2018 (3cf399e) x86_64) is okay. But I had to repair GRUB and the initramfs by booting an ISO image into repair mode and then decrypting and mounting /boot, /, /home … chroot … make … the usual procedure when the system does not boot.
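Roughly, the repair inside the chroot comes down to this (from memory; the device name is just an example):

    mkinitrd                                  # or: dracut -f --regenerate-all
    grub2-mkconfig -o /boot/grub2/grub.cfg    # regenerate the GRUB configuration
    grub2-install /dev/nvme0n1                # only if the bootloader itself needs reinstalling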
I have set up TW to keep several kernel versions. As 4.14.15 gave issues, I removed it.
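That retention is configured via the multiversion settings in /etc/zypp/zypp.conf; mine are roughly the stock ones:

    multiversion = provides:multiversion(kernel)
    multiversion.kernels = latest,latest-1,running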
After removing the 4.14.15 kernel, these kernel packages are left:
Well, if it works fine for you with the older kernel, it seems to be caused by a change in the kernel, and that would sound unrelated to the problem discussed here…
Btw, the latest/current kernel is 4.15.0…
Removing kernel-default-base-4.14.15-1.6.x86_64 was not possible, though.
Why? What happens when you try to remove it?
kernel-default-base is intended for minimal VMs mostly, and is useless for normal systems, because it lacks most of the hardware drivers.
So you’d better uninstall it.
My guess: you have kernel-docs, kernel-docs-html and kernel-macros in version 4.14.15 installed as well; you’d probably need to remove them too, as they may require kernel 4.14.15.
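You can check beforehand what actually depends on it, e.g.:

    rpm -q --whatrequires kernel-default-base             # which installed packages require it?
    zypper rm --dry-run kernel-default-base-4.14.15-1.6   # preview what the removal would pull out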
I am quite sure it is not a kernel issue alone. Otherwise there would be hundreds of reports.
My guess: in my case it is the result of a combination of the kernel, the new udev/systemd, the encrypted partition with LVM boot and root, and maybe more. And that seems a bit similar to the other cases. Just a guess, and an idea that it might be worth drilling down into the details if more people have these problems.
When booting .12 I get the password prompt for the encrypted partition, but not when trying to boot .15. Is this a kernel issue alone? My understanding is that the password prompt is issued by some other process?
I am a bit shy about trying .0 kernels. I prefer to wait for a .3 or so version.
kernel-macros is needed for the devel kernels. I need the -devel kernels so that the VMware Workstation modules get compiled for every new kernel.
I did not find a way to keep the macros etc. per kept kernel version.
Could zypper be set up accordingly?
It seems your hint about the base kernel was a good starting point for finding the root cause. In the dracut console I found that the /dev/nvme* devices are not available, probably because the modules for them are missing from the base .15 kernel, which is now the only one left. There are no drivers for the NVMe disks where root etc. reside.
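This is easy to verify from a running system (package names as installed here):

    rpm -ql kernel-default-base | grep -i nvme   # empty here: no NVMe driver in the base package
    rpm -ql kernel-default | grep -i nvme        # the full kernel package does ship it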
So with the current setup I am no longer able to pinpoint possible causes of the boot problems I had in the beginning. Issue closed.
I have now had this problem twice, both times after an upgrade, and both times I have managed to fix it. I fixed it by mounting the btrfs filesystem from a Fedora live USB: btrfs automatically performs recovery actions when mounting, and it reported quota system inconsistencies and misplaced extents. So I’m guessing that the quota system, which the btrfs devs themselves consider experimental, is acting up somehow. Why else would upgrading the system corrupt the btrfs subvolume, and why would it be fixed by simply mounting it? Also, why can this not be done in the recovery shell offered by the initramdisk? Does it not support btrfs?
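For anyone hitting the same thing, the fix from the live system boils down to something like this (the device path is a placeholder):

    mount /dev/sdXn /mnt           # a plain mount triggers btrfs log replay / recovery
    btrfs quota rescan -s /mnt     # check the quota rescan status
    btrfs quota disable /mnt       # optionally turn the quota groups off entirely
    umount /mnt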