boot fails, no password prompt for LUKS root

Hi,
I had a hardware failure (SATA controller apparently going bad) and my system locked up. Now I can get to GRUB and boot either of two kernels (and recovery). After that, I’m never prompted to unlock my disks, and thus the boot eventually times out looking for the locked partitions.

I’ve booted from USB and the disks and files are all fine. I have multiple snapshots of the root volume. My thought is to roll back to a previous snapshot, but I have no idea how. Most of the snapper rollback tutorials assume you have the GRUB snapshot boot entries (the default in Leap) or that you don’t have an encrypted system.

Can someone please steer me to some relevant instructions?

Since you are able to access your root volume, you should be able to check “/etc/crypttab”.

See if the entries there are still correct. Each entry lists an encrypted volume, either by device name, by device ID, or by UUID. If they use UUID, that should still be correct. But if you have made hardware changes due to the failed SATA controller, the device name or device ID might have changed. And if those have changed, it would explain why you are not being prompted for the LUKS password.
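For reference, a UUID-based entry typically looks something like this (the “cr_root” name and the UUID here are only placeholders, not your actual values):


cr_root  UUID=01234567-89ab-cdef-0123-456789abcdef  none  none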

For the moment, I’m just asking you to check. If that is the problem, then you need to fix “crypttab” and regenerate the “initrd”. We can look at how to do that later. First, please check.
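If you want to double-check, you can compare the UUID in the LUKS header against what “crypttab” records. Assuming your encrypted partition is “/dev/sda2” and the root volume is mounted at “/mnt” (adjust both to your layout):


# print the UUID of the LUKS header on the encrypted partition
cryptsetup luksUUID /dev/sda2
# compare with the UUID recorded in the mounted system’s crypttab
cat /mnt/etc/crypttab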

Ok, the UUID in /etc/crypttab still matches the encrypted partition. So that suggests that the last changes made to the initramfs were correct, right? I’m still learning a lot of this.

I’ve swapped the SATA controllers and cabling around a few times prior to the failed boot, but I’m almost positive nothing has changed since it crashed. Meaning, it had at least one successful boot in its current arrangement.

I realized that the snapshots won’t help at all, since they are entirely inside the encrypted partition. I was hung up on the rollback idea before I isolated the problem to failing to unlock the drive :smiley:

Thanks for the help

Then that is not the problem.

Maybe something else is wrong in the initramfs. Hardware changes might change which kernel modules are needed.

My suggestion would be to try regenerating the initramfs (or initrd).

To do that, you will need to mount the root file system in your rescue boot. I’ll assume that you are mounting it at “/mnt”.

Also mount anything else that matters (for example, “/boot/efi” on a UEFI system, which you would need to mount at “/mnt/boot/efi”).
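In case it helps, the whole sequence from the rescue system would look roughly like this. The device names and the “cr_root” mapping name are only examples; substitute your actual partitions:


# unlock the LUKS partition (prompts for the passphrase)
cryptsetup open /dev/sda2 cr_root
# mount the root file system
mount /dev/mapper/cr_root /mnt
# and the EFI system partition, on a UEFI system
mount /dev/sda1 /mnt/boot/efi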

And then:


mount --bind /dev /mnt/dev     # device nodes, so tools in the chroot can see the disks
mount --bind /sys /mnt/sys     # kernel interfaces
mount --bind /proc /mnt/proc   # process and kernel state

You also need to mount the “btrfs” subvolumes. I don’t use “btrfs”, so I’m not sure of the best way to do that. You could go through “/etc/fstab” in the mounted system (i.e. “/mnt/etc/fstab”) and come up with a mount command for each. But maybe try:


chroot /mnt
mount -a
exit
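If “mount -a” in the chroot gives you trouble, you can instead mount each subvolume explicitly from outside the chroot, one mount per “fstab” entry. The subvolume names below are just a typical default layout; check “/mnt/etc/fstab” for yours:


mount -o subvol=@/var /dev/mapper/cr_root /mnt/var
mount -o subvol=@/srv /dev/mapper/cr_root /mnt/srv
mount -o subvol=@/usr/local /dev/mapper/cr_root /mnt/usr/local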

Once you have the subvolumes mounted, go back to a “chroot /mnt” session, and run “mkinitrd”. And see if you can reboot into your system after that. Or perhaps even use “mkinitrd -A” to force all possible modules to be included.
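Putting that together, the final step would be something like:


chroot /mnt
mkinitrd        # or “mkinitrd -A” to include all modules
exit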

That did it! I’m back up!

Thanks so much!

So the initrd is built with just the specific modules for the local hardware. Any downside to using the mkinitrd -A option? It seems like it would give me a failsafe option for booting on another motherboard.

I’m glad to hear that. And thanks for reporting back.

Any downside to using the mkinitrd -A option?

It will result in a larger “initrd”, and I guess that uses more memory when it is loaded into a ramdisk. But, otherwise, I don’t think there’s a problem.

It seems like it would give me a failsafe option of booting from another motherboard.

Yes, that’s probably right, and probably the main use for this option.