Partitioning Help

Hello everyone!

I’m a newbie to openSUSE who somehow wrecked his last install of openSUSE (42.1). :stuck_out_tongue:

I placed an unencrypted ext4 /boot partition and encrypted btrfs / partition on my solid-state drive (/dev/sda), and an encrypted xfs /home on my hard-drive (/dev/sdb). They were all part of the same LVM.

Everything worked fine until I did an update. I’m not sure what happened, but this is what I saw:

http://i.imgur.com/VHtTyMy.jpg

The UUID of the disk it didn’t find corresponds to /dev/mapper/system-root.

I figured it was some kind of fluke, so I reinstalled. Same partitioning setup. And once again… after an update… same problem!

Even in my brief time with openSUSE I fell in love with it, so I don’t wanna give it up… any tips on what I can do?

Something is wrong with the LVM or with the “initrd”. I’m guessing it’s an “initrd” issue.

Boot live media. Then see if you can decrypt:


cryptsetup luksOpen /dev/sdaX cr_lvm
vgchange -a y
ls /dev/mapper

If that all works, check if the UUIDs match what is expected. The command “blkid” will list UUIDs.
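For the UUID check, something like this works from the live system (the device names here assume the volume group is called "system", as in the screenshot - adjust to match yours):

```shell
# list UUIDs of all block devices, including the opened LVM volumes
blkid

# or just the root logical volume
blkid /dev/mapper/system-root
```

Compare what blkid reports against the UUID in the dracut error message.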

And if that all looks good, mount the root partition:

# mount /dev/mapper/system-root /mnt

and check the output from

cat /mnt/etc/crypttab

Maybe post the output from that last command.
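For reference, an "/etc/crypttab" entry typically looks something like this (the target name and UUID below are made up for illustration - yours will differ):

```
# <target name>  <source device>                             <key file>  <options>
cr_lvm           UUID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9   none        luks
```

The first column is the name that appears under /dev/mapper once the device is opened.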

Note that you should change “/dev/sdaX” to whatever is appropriate for your encrypted LVM partition.

There’s only one hitch: I already wiped the disks with /dev/zero so they would be fresh for my third attempt!

I can only wonder if I did something wrong in partitioning, since this happened twice in a row and I haven’t seen this issue anywhere else online. I didn’t add any extra repos… just used the system for a while, and it rebooted fine multiple times… then I did an update and apparently it borked my crypttab or something.

After your next install, post content of “/etc/crypttab”.

I’ve seen some weirdness on one system. Changing the “crypttab” entry and rebuilding the “initrd” solved that problem.

The “crypttab” entry uses permanent device names. But it seems that some permanent device names aren’t all that permanent.

You know, I think there might have been an initrd update somewhere in there. I wonder if that’s what nuked the boot.

I’ll probably be reinstalling later today or tomorrow, so stay tuned. :stuck_out_tongue:

As of yesterday I found myself at the same point. In my case it was the UUID of the swap partition, which I had replaced.

Now I know that removing the swap partition without running mkinitrd corrupts boot badly - but none of the tools warn you about that (at least not the YaST partitioner).
Maybe the same happened to you - some tool/update didn’t run mkinitrd after changing something.
What you wrote about that UUID not being part of /dev/mapper… makes me think it could be another partition that isn’t part of the LVM → swap? Check for a partition like /dev/sdX.

So after “mount /dev/mapper/system-root /mnt”, as nrickert wrote, you can do
chroot /mnt
and execute
mkinitrd

I had to comment out the old UUID in /etc/fstab before rebuilding the initrd, to make sure it wouldn’t be included again.
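The steps above, sketched out in full (run as root from live media, after the encrypted LVM is already opened; the bind mounts are my addition - mkinitrd inside a chroot usually needs /dev, /proc, and /sys available):

```shell
# mount the installed root filesystem
mount /dev/mapper/system-root /mnt

# bind-mount the virtual filesystems the initrd tools expect
for fs in dev proc sys; do
    mount --bind "/$fs" "/mnt/$fs"
done

# rebuild the initrd inside the installed system
chroot /mnt mkinitrd
```

Edit /mnt/etc/fstab (and /mnt/etc/crypttab if needed) before the last step, so stale UUIDs don’t get pulled back in.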

Modifying a partition can change the UUID. The UUID is a random number generated when the partition is created.

Still haven’t reinstalled yet, but I can say with certainty that I checked the UUIDs as reported in blkid against the UUIDs reported in fstab and they did match up appropriately. I didn’t check crypttab but I assume it was the same.

Typically, “/etc/crypttab” uses the device-id rather than the UUID. On one system, I found that switching it to UUID solved some problems.

The UUID mentioned in the screenshot is the one to look for. It does not look like a device-id to me - rather like an ordinary partition.
The initrd tries to find it during startup and fails because either
a. it was removed (my case), or
b. the initrd doesn’t include the drivers needed to open the device (e.g. multipath is not included, which LVM needs in order to check for UUIDs) - I don’t know what crypt needs there.

If you don’t want to reinstall, you need to find out which volume had the UUID the dracut shell complains about in your screenshot.
Once you know, you either
1.) remove it from the initrd by removing the entry in fstab and rebuilding the initrd, or
2.) if that partition is important for boot, include it and rebuild the initrd with the needed modules (multipath, lvm, ?crypt?).

Maybe something rebuilt the initrd (patching in a new kernel will, for example) and forgot to include the modules for your setup.

Hint for multipath: if you need it, check that the service is running and that the device is set to multipath in the YaST partitioner.
If multipath is disabled in the partitioning tool, I think it will not be included in the initrd, so LVM won’t work.
YaST mixed up that flag for me and switched the system to multipath while I was only removing swap. After disabling it and rebuilding the initrd, everything was fine again.