grub2-install error with efi on raid1 (error at boot: "Verification requested but nobody cares")

On a freshly installed openSUSE Leap 15.2 I performed a full update and ended up with an unbootable system.
At boot grub stops with the message: “error: verification requested but nobody cares”

I searched in the forum and on bugzilla and I immediately found these:

But the proposed patch openSUSE-2020-1357 was already installed (I performed a full zypper patch), so it must be something else.

I tried to manually launch grub2-install --verbose and I found this:

grub2-install: info: copying `/boot/grub2/x86_64-efi/core.efi' -> `/boot/efi/EFI/opensuse/grubx64.efi'.
grub2-install: info: Registering with EFI: distributor = `opensuse', path = `\EFI\opensuse\grubx64.efi', ESP at mduuid/9182c46b9d469f79b48850b68f3371a5.
grub2-install: info: executing efibootmgr --version </dev/null >/dev/null.
grub2-install: info: executing modprobe -q efivars.
grub2-install: info: executing efibootmgr -c -d.
efibootmgr: option requires an argument -- 'd'
efibootmgr version 14
usage: efibootmgr [options]
        -a | --active         sets bootnum active
        -A | --inactive       sets bootnum inactive
        -b | --bootnum XXXX   modify BootXXXX (hex)
        -B | --delete-bootnum delete bootnum (specified with -b)
             --delete         delete entry by bootnum (-b), by UUID (-P)
                              or by disk+partition+file] (-d -p -l)
        -c | --create         create new variable bootnum and add to bootorder
        -C | --create-only      create new variable bootnum and do not

My guess is that efibootmgr is missing the disk parameter. This could be caused by my particular disk layout, in which the EFI partition is on RAID1 (which is a supported configuration, although in the past I have occasionally had some problems with it)
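If that guess is right, a working invocation would have to name each underlying RAID member explicitly. This is a hypothetical sketch of what the missing calls might look like (the device names, partition number and "opensuse" label follow the layout below; the flags are standard efibootmgr options):

```shell
# Hypothetical sketch: register one NVRAM boot entry per RAID1 member,
# since the firmware only sees the individual partitions.
# /dev/vda1 and /dev/vdb1 hold the mirrored ESP in this layout.
efibootmgr -c -d /dev/vda -p 1 -L opensuse -l '\EFI\opensuse\grubx64.efi'
efibootmgr -c -d /dev/vdb -p 1 -L opensuse -l '\EFI\opensuse\grubx64.efi'
```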

This is my disk layout:

vda                        253:0    0   20G  0 disk  
├─vda1                     253:1    0    1G  0 part  
│ └─md0                      9:0    0 1024M  0 raid1 /boot/efi
├─vda2                     253:2    0    1G  0 part  
│ └─md1                      9:1    0 1024M  0 raid1 /boot
├─vda3                     253:3    0    1G  0 part  
│ └─md2                      9:2    0 1024M  0 raid1 
│   └─cr_swap              254:4    0 1024M  0 crypt [SWAP]
├─vda4                     253:4    0   16G  0 part  
│ └─md3                      9:3    0   16G  0 raid1 
│   ├─sysVG-rootLV         254:0    0   11G  0 lvm   /
│   ├─sysVG-varLV          254:1    0  1.1G  0 lvm   /var
│   ├─sysVG-privateLV      254:2    0    1G  0 lvm   
│   │ └─cr_sysVG-privateLV 254:5    0 1022M  0 crypt /private
│   └─sysVG-homeLV         254:3    0    1G  0 lvm   /home
└─vda5                     253:5    0 1023M  0 part  /vmdisk1
vdb                        253:16   0   20G  0 disk  
├─vdb1                     253:17   0    1G  0 part  
│ └─md0                      9:0    0 1024M  0 raid1 /boot/efi
├─vdb2                     253:18   0    1G  0 part  
│ └─md1                      9:1    0 1024M  0 raid1 /boot
├─vdb3                     253:19   0    1G  0 part  
│ └─md2                      9:2    0 1024M  0 raid1 
│   └─cr_swap              254:4    0 1024M  0 crypt [SWAP]
├─vdb4                     253:20   0   16G  0 part  
│ └─md3                      9:3    0   16G  0 raid1 
│   ├─sysVG-rootLV         254:0    0   11G  0 lvm   /
│   ├─sysVG-varLV          254:1    0  1.1G  0 lvm   /var
│   ├─sysVG-privateLV      254:2    0    1G  0 lvm   
│   │ └─cr_sysVG-privateLV 254:5    0 1022M  0 crypt /private
│   └─sysVG-homeLV         254:3    0    1G  0 lvm   /home
└─vdb5                     253:21   0 1023M  0 part  /vmdisk2

Please note that this layout was perfectly supported with openSUSE Leap 42.3 and was perfectly supported during installation of 15.2. The problem appears only after the update.

As a workaround I tried to manually launch grub2-install with several option combinations, and this one seems to have solved the problem, but
I do not understand why.

Any ideas? I’m going to open a bug for this.

Thanks in advance

What file system? Btrfs does not like /boot being on a separate partition

I don’t use RAID, so this might not help.

The problem originally occurred after some updates, but was fixed with further updates. You might have an in-between system.

When booting, hit the ‘e’ key on the grub menu line. Scroll down until you find a line that starts with “linux”. If it already uses “linuxefi” then ignore my reply. But if it is using “linux” but not “linuxefi” then change that “linux” to “linuxefi”. Then scroll down a line or two and change “initrd” to “initrdefi”. Then CTRL-X to boot.

If that works, then you need the line


in “/etc/default/grub”. It is probably there, but either commented out or with “false”. After fixing that, use:

grub2-mkconfig -o /boot/grub2/grub.cfg

to rebuild the boot menu.

Not only that, but the command line is truncated because grub could not determine the disk name and left it as NULL, which was interpreted as the end of the argument list. That is a bug by itself (not checking for NULL before using it).

the EFI partition is on RAID1 (which is a supported configuration

As you have seen, this is not going to work in the general case. There is exactly one possible way it may work; I do not know whether the documentation means that or it is just a leftover.

Any ideas?

To use MD RAID1 for ESP you need

  1. Use metadata format 1.0 (or 0.9, but that is obsolete). These are the only possible metadata formats, because they place the MD superblock at the end of the partition, so the firmware sees a plain FAT filesystem. The YaST partitioner seems to default to the 1.0 format, so it is probably OK (as long as the defaults are not changed).
  2. The firmware does not know anything about MD RAID1 (unless you install drivers), so it will see two individual partitions. Each partition should have the correct partition type for an ESP (actually must, given the next consideration). But the YaST partitioner cannot create RAID unless the partitions have the RAID partition type, so you cannot really install in this configuration.
  3. Skip updating NVRAM boot entries during grub-install (as you have seen, it does not work anyway). That means you must use either “grub-install --removable” (which copies the bootloader into \EFI\Boot) or “grub-install --no-nvram” (which skips the efibootmgr call, but then you need to create the necessary entries manually at least once). Again, neither is possible during installation. And to use \EFI\Boot you usually need to set your system to “boot from UEFI hard disk” (exact setup options vary), and for that to work you must have an ESP with the correct partition type on your disk; see the previous point. At least Leap 15.2 does not allow you to explicitly disable NVRAM updates.
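Putting those points together, a manual setup might look like the following sketch (run as root; the device names follow the layout in this thread, and every command here is an assumption about one possible configuration, not a tested recipe):

```shell
# 1. Create the mirror with metadata 1.0 so the superblock sits at the
#    end of the partition and the firmware sees a plain FAT filesystem.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/vda1 /dev/vdb1
mkfs.vfat /dev/md0

# 2. The ESP partition type on each member must be set with a
#    partitioning tool (e.g. gdisk) - not shown here.

# 3. Install grub without touching NVRAM, then create the boot entries
#    manually, once per underlying member partition.
grub2-install --target=x86_64-efi --efi-directory=/boot/efi --no-nvram
efibootmgr -c -d /dev/vda -p 1 -L opensuse -l '\EFI\opensuse\grubx64.efi'
efibootmgr -c -d /dev/vdb -p 1 -L opensuse -l '\EFI\opensuse\grubx64.efi'
```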

By default openSUSE uses a signed bootloader with a fixed grub.cfg which redirects to /boot/grub2/grub.cfg, which means - as you need to perform manual setup anyway - it is probably simpler to copy the content of the ESP to another partition and update it periodically.
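If you go the two-separate-partitions route, the periodic update can be a trivial copy. A minimal sketch, assuming a second FAT partition mounted at a hypothetical /boot/efi2:

```shell
# sync_esp: mirror the EFI tree from the primary ESP to a backup copy.
# The mount points are hypothetical examples; pass your own as arguments.
sync_esp() {
    primary="$1"
    secondary="$2"
    # Both mount points must exist before we delete anything.
    [ -d "$primary/EFI" ] && [ -d "$secondary" ] || return 1
    rm -rf "$secondary/EFI"
    cp -r "$primary/EFI" "$secondary/EFI"
}

# Typical invocation (as root, after every kernel or grub update):
# sync_esp /boot/efi /boot/efi2
```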

I’m going to open a bug for this.

For what exactly? There is only one real bug here - grub should check earlier that it has a valid disk that can be used to register the EFI boot entry, and fail with a meaningful message. Otherwise, full support for ESP on RAID1 requires cooperation between multiple packages and is certainly out of scope for a maintenance update.

And I do not think supporting it is possible at all. Because the firmware works with the individual partitions and the ESP is writable from the firmware, you always risk that the content of the partitions diverges without the Linux MD driver knowing it. This could lead to arbitrary corruption later. It is a well-known issue that I have seen more than once in other operating systems. So I personally think that if this should be supported, then as two separate partitions which are updated together (by grub-install or higher-level tools like update-bootloader).

All file systems are ext4

Thank you very much for your quick answer. Yes, I already checked this and linuxefi and initrdefi are already present.
I also have to say that secure boot is disabled.
I’m running inside a KVM virtual machine with OVMF firmware.

  1. I confirm metadata is 1.0
  2. that’s not always true. The OVMF firmware ignores the partition type and simply searches for a FAT partition with the EFI directory. My physical hardware does exactly the same (I actually run openSUSE Leap 42.3 with the same disk layout)
  3. Your analysis is certainly true, but prior to some unidentified update this configuration was working perfectly.

The bug is for grub2-install or some other package that “loses” the capability to detect that the EFI partition is on RAID and instructs efibootmgr (which should be called twice, once for /dev/vda and once for /dev/vdb)
Why do you think support is impossible? If no one else is writing to your ESP partitions, EFI on RAID is possible. I’ve used it for years. Yes, I have had some problems in the past, but always due to some package update that forgot to consider this layout!

Because I’m planning a migration from 42.3 to 15.2 I would also like to have a plan B. It seems you are suggesting I create two different ESP partitions and keep them in sync manually. Is that right? I will try, but in the past I have had problems with this layout because systemd/initrd refuses to continue booting if the EFI partition is missing, even if I put the nofail option in fstab. I’ll give this configuration a try and let you know.

Thank you very much for your precious suggestions.

I opened a bug for this:

I tested installation of RTM 15.2 (without using the update repository) in a RAID1 setup. It worked. Some comments:

  1. The installer allows creating MD RAID on partitions of type Linux and Linux Swap, but does not allow it (it does not offer these partitions) on partitions of type EFI System Partition. As I already mentioned, this is wrong - we need to preserve the original partition type for maximum compatibility. This would be a valid bug report.
  2. The default openSUSE install uses shim as the primary bootloader and shim-install to install it. shim-install does exactly what I already suggested - it explicitly detects MD RAID and calls efibootmgr for both individual partitions, creating two entries. This is just one step away from simply copying the same content to separate partitions. Also, I am not sure what will happen in the case of IMSM, where RAID is handled as a single device. In the worst case this will result in two identical firmware boot entries.

So the documentation is correct. In the default setup openSUSE does support ESP on RAID1. But that does not mean that every possible setup will also work. In particular, grub-install never worked in such a case.

It is enough if there is one firmware implementation in the wild that cannot handle such setup.

The bug is for grub2-install or some other package that “loses” the capability to detect that the EFI partition is on RAID and instructs efibootmgr (which should be called twice, once for /dev/vda and once for /dev/vdb)

grub2-install never supported it in the first place. Just try to disable secure boot support in the YaST bootloader module. It is a user error - you should not be using grub2-install in this case because it does not match your bootloader configuration.

Why do you think support is impossible? If no one else is writing to your ESP partitions, EFI on RAID is possible.

Exactly because of “if” in your second sentence. Because distribution vendor has no way to ensure this cannot happen.

I’ve used it for years.

Because the next time a user has problems and asks on the forums, someone will suggest using the EFI shell to troubleshoot and writing the output to a log file on the ESP. Just like on these forums there are a lot of suggestions to use grub2-install without any attempt to actually understand the user’s configuration and whether this command is appropriate at all (hint: it is not in your case).

So if you understand what you are doing and are sure you can avoid human errors - by all means, do what you like. But offering average users a gun to shoot themselves in the foot is irresponsible. But I do not think anyone in openSUSE will listen anyway.

First of all thank you very much for your time!

I did not know about shim-install at all. I thought grub2-install was doing the whole job, installing shim when necessary. I double checked and discovered that in my first test secure boot was disabled in the firmware but enabled in YaST. But I’m almost sure that after applying the patch, grub2-install was called, not shim-install, perhaps because secure boot was not detected. It’s strange, because shim is supposed to work with secure boot either enabled or disabled, and a grub update should trigger the right bootloader installation (shim-install in this case).

I’m going to do some other tests with different configurations to see what happens, thank you for the hint.

No, the command that does the whole job is update-bootloader. It is a high-level tool that handles all supported bootloaders in (open)SUSE according to the current configuration set by YaST. It will call shim-install or grub-install as needed (and it called lilo/elilo/… in the past as needed as well). But it does not matter, because every time a user has an issue someone comes and tells them to use grub-install, because someone did it once or just found it on the Internet. I have lost hope of explaining that this is not correct on (open)SUSE - or, better, that running update-bootloader is going to be the least harmful option because it at least respects your actual current configuration.

The alternative is the usual advice to go into the YaST bootloader module, change something (like the timeout) and save, which will in the end call the same update-bootloader.

Thank you very much for the clarification! I was one of the users who fell into the grub-install trap :slight_smile:

Of course update-bootloader ends with the same error when it tries to call efibootmgr without the proper disk parameter.
With the aim of finding a valid strategy for my layout, I’m wondering whether the efibootmgr call is the last job update-bootloader tries to do, or whether there are other steps which are aborted due to the efibootmgr failure.
I’m asking because a good strategy could be to simply invoke efibootmgr manually twice (once per disk) after every update-bootloader invocation (after kernel/grub package updates, for example).
What do you think?
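That strategy could be wrapped in a tiny helper that just prints the two efibootmgr invocations for review. A sketch, where the device names, partition number and "opensuse" label are examples from this thread, not something update-bootloader provides:

```shell
# esp_nvram_cmds: print one efibootmgr invocation per RAID1 ESP member.
# First argument is the partition number, the rest are the member disks.
esp_nvram_cmds() {
    part="$1"
    shift
    for disk in "$@"; do
        printf 'efibootmgr -c -d %s -p %s -L opensuse -l \\EFI\\opensuse\\grubx64.efi\n' \
            "$disk" "$part"
    done
}

# Print the commands, then run them by hand as root after every
# update-bootloader invocation:
# esp_nvram_cmds 1 /dev/vda /dev/vdb
```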

Now I’m planning to complete the tests with EFI on RAID and then to try another strategy with two EFI partitions manually synced. In the past I have had problems with this configuration: when the disk with the /boot/efi partition is missing, systemd/dracut refuses to complete the boot even if the EFI partition has already been “used” in the early phase of the boot process.
This was the bug I opened against leap 42.3, but I had not enough time to follow it before the EOL.

Thank you again!

PS: sorry, my English is not very good.

You started with the statement that you installed Leap 15.2 using ESP on RAID1. This works only if openSUSE is configured for secure boot, i.e. it is using shim as primary bootloader. And in this case update-bootloader works because it calls shim-install.

If update-bootloader fails now it means it calls grub2-install directly which means you changed bootloader configuration. Show your /etc/sysconfig/bootloader and /etc/default/grub.

My first test was with secure boot disabled in the BIOS but enabled in the bootloader configuration (yes, inconsistent: it was a mistake). With this configuration the system was able to boot after installation, but not after an upgrade (which triggered an update-bootloader). I deleted the VM so I can’t show the config files.

My second test was with secure boot disabled in both the BIOS and the bootloader configuration. With this configuration the system was able to boot after installation, and it seems able to boot also after the upgrade, but the efibootmgr error was still present. I suppose everything was working because update-bootloader changed the grubx64.efi file and the old entry in the UEFI firmware still points to that file (which is the only file needed when shim is not used, I think).

These are /etc/sysconfig/bootloader and /etc/default/grub from my second test (commented parts are omitted):




GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/mapper/cr_swap mitigations=auto quiet"




I’m going to systematically repeat all the tests with secure boot both enabled and disabled (but I need it disabled in my configuration, to be able to use the NVIDIA driver).

Thank you very much.

That won’t work. It means use grub2-install and not shim-install.

No, that is not at all incoherent. That allows you to turn secure boot on or off in the BIOS without causing problems with booting.

I performed some more tests:

with secure boot enabled both in the BIOS and in the bootloader configuration: everything works fine. efibootmgr is called twice, once per disk, with the correct parameters.

with secure boot disabled in both the BIOS and the bootloader configuration: the installer is not able to set the efivars and the system is unbootable at the end of the installation. If you are able to set the efivars manually you can circumvent the problem. update-bootloader is not able to manage this configuration.

with secure boot disabled in the BIOS but enabled in the bootloader configuration: the installer is able to create a bootable system, but it becomes unbootable after the first kernel/grub update. update-bootloader is not able to manage this configuration.

I updated the bug with this additional info. In my opinion this is a regression from previous versions of Leap, but unfortunately the developers are not considering this a priority and the bug is still unassigned.

That is rather vague. Firmware fails to start bootloader? Bootloader fails to start kernel? Kernel fails to boot? Kernel boots but GUI does not load? Each of these cases needs different steps to debug.

update-bootloader is not able to manage this configuration.

update-bootloader calls /usr/lib/bootloader/grub2-efi/install, which should call /usr/sbin/shim-install. There is no place where either of them checks the firmware secure boot state. Both are shell scripts; add “set -x” as the first command and post the output of “update-bootloader --reinit”.
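For reference, that tracing step could be automated like this (a sketch; it edits the named scripts in place, so keep backups - the paths are the ones quoted above):

```shell
# trace_script: insert "set -x" right after the first line (the shebang)
# of a shell script so its execution is traced when it next runs.
trace_script() {
    sed -i '1a set -x' "$1"
}

# e.g. as root:
# trace_script /usr/lib/bootloader/grub2-efi/install
# trace_script /usr/sbin/shim-install
# update-bootloader --reinit 2>&1 | tee /tmp/update-bootloader.log
```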

I cannot reproduce it. I installed 15.2 in VM with ESP on RAID1 and secure boot disabled, then updated it and it booted just fine.

It stops in the grub stage with the error I described in the first post (“Verification requested but nobody cares”)

Thank you very much for the test and your precious help! May I ask which virtualization system you used? If you perform an update-bootloader through YaST, do you receive any error?

By the way, I’m now going to rebuild the test environment with secure boot enabled in the bootloader configuration but disabled in the BIOS, and I’ll post the information you asked for.