I thought kernel 4.10 would fix the random boot failure with LUKS that occurred on kernel 4.9.x.

But now it can’t boot at all. Am I the only one having this problem, or do you not use LUKS encryption?

Any suggestions?

I’m not currently having a problem. But then I crippled “plymouth”.

Here’s what I did. I looked for “splash=silent” and changed that to “nosplash”.

The first step was to make that change in “/boot/grub2/grub.cfg”. That allowed me to boot.

I then made the corresponding change in “/etc/default/grub”, so that when “grub.cfg” is updated, it gets that value.
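
For reference, the relevant line in “/etc/default/grub” ends up looking something like this (the other options shown are only examples; keep whatever your system already has there):

    GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/sda2 nosplash quiet showopts"

And to regenerate “grub.cfg” from it:

    grub2-mkconfig -o /boot/grub2/grub.cfg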

Everything has been working since then.

Okay, one exception. As an experiment, I changed “nosplash” back to “splash=silent” on the grub2 boot screen, and I had the same LUKS failure (no prompt for the encryption key). So I rebooted without that change, and all was fine. I won’t retest until I hear rumors that all is working.

Of course, I don’t get the plymouth screen. However, all else seems to be working. I have two LUKS partitions with the same key, and still only have to enter the key once.

Actually, at one time, I uninstalled “plymouth” and locked it so that it would not reinstall. But one of the updates complained about a conflict. So I allowed it to be installed again. My current understanding is that “plymouth” is still running, but is not displaying its splash screen.
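
In case anyone wants to try the same experiment, the remove-and-lock sequence with zypper goes roughly like this:

    zypper remove plymouth      # uninstall the package
    zypper addlock plymouth     # keep updates from pulling it back in
    zypper removelock plymouth  # what I ran later, to allow it again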

After reading your post, I realize my previous problem could probably have been fixed by editing the grub configuration. But now it seems to be a different issue (I’m not sure), because it gets past the password prompt but then shows another error:
“Failed to start LVM2 PV scan on device 254:0”
See “systemctl status lvm2-pvscan@254:0.service” for details.
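
To get at the details that the message points to, I ran something like:

    systemctl status lvm2-pvscan@254:0.service
    journalctl -u lvm2-pvscan@254:0.service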

Then here’s the result:

It’s hard to comment without knowing your actual disk and volume configuration. It says “refusing to activate partial system/home” which implies that some disk that is part of this volume group is missing.

Here’s the partitioning. I think I used automatic LVM partitioning during installation, as my understanding of LVM is a mess despite reading several tutorials.

I wonder why there are 2 LVMs. LVM is a container that you can put multiple logical partitions in, and I don’t see any reason to have more than one per drive. You can link multiple drives so that they look like one big space, which means you can have a single space that is bigger than any single drive in the system.
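
Just to illustrate, spanning one volume group across two drives looks roughly like this (the device and volume names here are made up):

    pvcreate /dev/sdb1 /dev/sdc1          # mark both partitions as LVM physical volumes
    vgcreate vgdata /dev/sdb1 /dev/sdc1   # one volume group across both drives
    lvcreate -n lvbig -l 100%FREE vgdata  # one logical volume bigger than either drive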

I have never done it that way. I’ve only ever assigned one partition to the LVM.

Make sure that both partitions are listed in “/etc/crypttab”.
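
The entries should look something like this (the mapper names and UUIDs below are placeholders; “blkid” will show your real UUIDs):

    cr_one  UUID=11111111-2222-3333-4444-555555555555  none  none
    cr_two  UUID=66666666-7777-8888-9999-000000000000  none  none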

If I had a situation with two adjacent partitions, I would boot live media before the install, delete both partitions, and replace them with a single combined partition for the LVM. So I don’t really have any experience with the problem that you are having.

/etc/crypttab has both partitions included.

Like I said before, I have no good understanding of LVM, so the LVM partition setup was probably suggested during installation.

Anyway, I had no problem before. This issue only occurs after the kernel update to 4.10. Weirdly, booting into the 4.9 kernel is also affected now.

Is there a way out of this instead of complete re-installation?

Then probably not a kernel issue.

It is possibly a “dracut” problem. Something may be wrong in the “initrd”, such that it fails to decrypt one of those partitions.
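
If that is the case, rebuilding the “initrd” might be worth a try before anything drastic (no promises that it cures this particular failure):

    dracut --force
    # or the traditional openSUSE wrapper:
    mkinitrd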

Is there a way out of this instead of complete re-installation?

I don’t have any experience in this particular situation. You could try a bug report. Maybe someone with “dracut” expertise can help.

There is not enough information to even guess. Start with providing full “journalctl -b” output (upload to http://susepaste.org).
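
Something along these lines will capture the whole boot log to a file that you can then upload:

    journalctl -b > /tmp/journal-b.txt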

I’ve run it several times. The only related entry is “Failed to start LVM2 PV scan on device 254:0” in the “journalctl -b -p err” output.

Perhaps I’m missing something, but the image of your partitioning setup shows two LUKS volumes in the left box, while on the right side it appears that no mount point was assigned to either.