Hi,
I’m experimenting with LEAP 15.2 to find the optimal solution for complete redundancy using 2 disks and software RAID. I have a fully redundant layout in my LEAP 42.3 installation and I’m trying to reproduce the same layout in 15.2. In theory, two approaches are possible: EFI on a RAID1 partition, or two EFI partitions manually aligned each time it is necessary. In LEAP 42.3 only the first approach was successful; the second one suffers from a bug that makes it useless (https://bugzilla.suse.com/show_bug.cgi?id=1059169)
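As far as I understand, the first approach only works if the array is created with 1.0 metadata, which is stored at the end of the partition so the firmware sees each member as a plain FAT filesystem. A rough sketch of how such an array could be created (device names match my layout below, run as root; I'm not suggesting anyone paste this blindly):

```
# RAID1 over both ESP partitions; --metadata=1.0 keeps the superblock
# at the end of each device so the UEFI firmware can still read the
# member as an ordinary FAT filesystem.
mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=1.0 /dev/vda1 /dev/vdb1
mkfs.vfat -F32 /dev/md3
# Then mount /dev/md3 on /boot/efi instead of a single partition.
```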
In LEAP 15.2 I tried to reproduce the same strategy I adopted for LEAP 42.3, but I ran into a problem that I’m still investigating (https://bugzilla.suse.com/show_bug.cgi?id=1179981, https://forums.opensuse.org/showthread.php/547946-grub2-install-error-with-efi-on-raid1-(error-at-boot-quot-Verification-requested-but-nobody-cares-quot-) ; please note that I cannot enable Secure Boot in the BIOS)
So while investigating the problem I just mentioned, I started experimenting with the second approach to see whether it is now, in LEAP 15.2, a viable way to obtain EFI redundancy.
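For the second approach, the "manual alignment" step can be scripted. A minimal sketch, assuming the two ESPs are mounted on /boot/efi and /boot/efi2 as in my layout (sync_esp is a name I made up, not an openSUSE tool):

```shell
# Mirror the contents of the primary ESP onto the secondary one.
# Meant to be run after every update that touches /boot/efi
# (grub2, shim, ...).
sync_esp() {
    src="${1:-/boot/efi}"
    dst="${2:-/boot/efi2}"
    # Remove everything under the destination first so stale files
    # do not linger, then copy the tree preserving attributes.
    find "$dst" -mindepth 1 -delete
    cp -a "$src"/. "$dst"/
}
```

rsync -a --delete would do the same in one step, if rsync is installed.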
Unfortunately I discovered that LEAP 15.2 suffers from the same problem I had with LEAP 42.3: when the partition mounted on /boot/efi is missing due to a failed disk, the BIOS can start the boot using the second EFI partition (which I mounted on /boot/efi2), but the system (dracut, I suppose) is not able to complete the boot and for some reason complains about the missing /boot/efi, even though it is no longer needed to complete the boot.
This is the disk layout:
vda 253:0 0 20G 0 disk
├─vda1 253:1 0 500M 0 part /boot/efi
├─vda2 253:2 0 1G 0 part
│ └─md0 9:0 0 1024M 0 raid1 /boot
├─vda3 253:3 0 1G 0 part
│ └─md1 9:1 0 1024M 0 raid1
│ └─cr_swap 254:4 0 1024M 0 crypt [SWAP]
├─vda4 253:4 0 15G 0 part
│ └─md2 9:2 0 15G 0 raid1
│ ├─sysVG-rootLV 254:0 0 10G 0 lvm /
│ ├─sysVG-privateLV 254:1 0 480M 0 lvm
│ │ └─cr_sysVG-privateLV 254:5 0 478M 0 crypt /private
│ ├─sysVG-varLV 254:2 0 1.4G 0 lvm /var
│ └─sysVG-homeLV 254:3 0 1G 0 lvm /home
└─vda5 253:5 0 1G 0 part /vm1
vdb 253:16 0 20G 0 disk
├─vdb1 253:17 0 500M 0 part /boot/efi2
├─vdb2 253:18 0 1G 0 part
│ └─md0 9:0 0 1024M 0 raid1 /boot
├─vdb3 253:19 0 1G 0 part
│ └─md1 9:1 0 1024M 0 raid1
│ └─cr_swap 254:4 0 1024M 0 crypt [SWAP]
├─vdb4 253:20 0 15G 0 part
│ └─md2 9:2 0 15G 0 raid1
│ ├─sysVG-rootLV 254:0 0 10G 0 lvm /
│ ├─sysVG-privateLV 254:1 0 480M 0 lvm
│ │ └─cr_sysVG-privateLV 254:5 0 478M 0 crypt /private
│ ├─sysVG-varLV 254:2 0 1.4G 0 lvm /var
│ └─sysVG-homeLV 254:3 0 1G 0 lvm /home
└─vdb5 253:21 0 1G 0 part /vm2
vdc 253:32 0 1G 0 disk
└─vdc1 253:33 0 1023M 0 part
When I simulate a failure on disk 2 (vdb), everything works fine; when I simulate a failure on disk 1 (vda), dracut drops me into the emergency shell. Please note that both the efi and efi2 mount points have the nofail option in fstab.
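For completeness, the relevant fstab entries look like this (the UUIDs here are placeholders, not my real ones); as I understand it, nofail should tell systemd/dracut not to block the boot when the device is absent:

```
UUID=AAAA-0001  /boot/efi   vfat  utf8,nofail  0  2
UUID=AAAA-0002  /boot/efi2  vfat  utf8,nofail  0  2
```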
I opened a bug with all the details: https://bugzilla.suse.com/show_bug.cgi?id=1180383
Any ideas on how I can work around this issue? Alternatively, is there a different approach to obtain EFI redundancy?
Thank you in advance!