How to EFI boot with RAID-1

My software RAID 1 was working fine as long as the two disks were 2TB each.
Upgrading to 4TB disks turned into a nightmare.

Of course I need to use GPT with these disks.
The straightforward approach:

  • remove one of the old disks and build in the new one
  • create the partitions with FD00 id (RAID)
  • mdadm /dev/md… -a /dev/sd… and let the system do a resync of the mirror;
    this worked fine overnight.
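The steps above can be sketched as shell commands. This is only a sketch, not the poster's exact commands: /dev/sda as the surviving disk, /dev/sdb as the new disk, /dev/md0 as the mirror, and partition 3 as the RAID member are assumptions based on the layout shown later in this thread. The function only prints the commands so they can be reviewed before being run as root:

```shell
# A sketch of the disk swap (hypothetical device names, review before use):
# replicate the GPT layout onto the new disk, give it fresh GUIDs, then add
# the RAID partition so the mirror resyncs.
replace_mirror_half() {
  printf '%s\n' \
    'sgdisk --backup=table /dev/sda' \
    'sgdisk --load-backup=table /dev/sdb' \
    'sgdisk -G /dev/sdb' \
    'mdadm /dev/md0 --add /dev/sdb3'
}
replace_mirror_half
```

After the add, “cat /proc/mdstat” shows the resync progress.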

Then I let grub2 write the boot sector of the new disk (yast2 bootloader, enable redundancy for MD array); that worked fine as well.
Take out the second “old” disk and restart the computer.
IT DOES NOT BOOT!

OK, I found that I need a partition /boot/efi and all that stuff.
Now it works with:
GPT fdisk (gdisk) version 0.8.7

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Disk /dev/sda: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 62C8ACD2-3E46-47F2-B99A-B68116C7D645
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 10658669 sectors (5.1 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          321535    156.0 MiB  EF00  Primary
   2          321536         8722431    4.0 GiB    FD00  Primary
   3         8722432      5849876479    2.7 TiB    FD00  Primary
Same for /dev/sdb

/dev/sda2 & /dev/sdb2 form /dev/md1, serving as swap.
/dev/sda3 & /dev/sdb3 form /dev/md0, mounted as /.

cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[2] sda2[0]
      4192192 blocks super 1.0 [2/2] [UU]
        resync=DELAYED
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sdb3[2] sda3[0]
      2920576832 blocks super 1.0 [2/1] [U_]
      [>....................]  recovery =  0.6% (18742016/2920576832) finish=2903.9min speed=16654K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

(Yes, it is resyncing at the moment, no worries.)

What still bothers me a lot:

  1. The /boot/efi partition (/dev/sda1) is not “raided”; I had to copy it to /dev/sdb1 with a “dd”.
  2. GRUB issues the message “Perl-Bootloader: 2014-04-01 18:16:43 <3> yast-0309.1 MBRTools::examineMBR.189: Error: Examine MBR cannot open /dev/md” on the command line when you start “yast2 bootloader”.
    No clue what this is trying to tell me. After all, there is no MBR with GPT / EFI boot.
  3. There is no way, short of re-installing from scratch, to use disks larger than 2TB if you used MBR before.
  4. YaST's support for RAID with GPT is lousy. It is completely missing a proposal for the RAID partitioning scheme.

Anybody with a solution for 1) ?
Anybody with an idea for 2) ?

I have zero experience with RAID.

It is possible to do MBR booting with GPT, and grub installed in the MBR (the protective MBR). You need to create a BIOS Boot partition (“gdisk” type code is EF02, as I recall). It is typically created to use sectors 34-2047.
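For reference, creating such a BIOS Boot partition with sgdisk might look like this. A sketch only: the disk /dev/sda, partition number 4, and the sector range are assumptions, and the command is only printed for review:

```shell
# Sketch: create a BIOS Boot partition (gdisk type code EF02) in the usual
# sector 34-2047 gap. Disk and partition number are hypothetical examples.
bios_boot_cmd() {
  printf '%s\n' 'sgdisk --new=4:34:2047 --typecode=4:EF02 /dev/sda'
}
bios_boot_cmd
```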

The issue IS NOT whether to EFI boot or to MBR boot.
The issue IS that you need either the EF02-type partition (BIOS) or the EF00-type partition (EFI), neither of which can be “raided”. And you need one of these in any case, EFI or MBR boot.

If you have “zero experience with RAID”, why waste your, my, and everybody's time with your posting?
Sorry, I do not want to be rude, but it is annoying to get replies without any substance.

Sorry, but when you do not want to be rude, you had better not do anything that even comes close to it.

All people here try to help as well as they can. Sometimes this is more straightforward than at other times, but it is never to waste your time. People try to gather information and to give information in the hope that a common effort might be useful.

And about wasting time: you waste our time by not even telling us in the first place which version of openSUSE you use.

Hi
What I did on one system of mine was boot from an SD card with just /boot, leaving the system as MBR with the raided disks GPT, then set the system to boot via the SD card. You could also use a USB device instead; either method adds minimal delay to the system booting.

You have got a point there.
As you can see from my location, English is not my mother tongue.
I may have failed to notice that something which is perfectly polite when expressed in German can tend towards rudeness when expressed in another language. I promise to take care about that from now on.

With the two posts here (I am a newcomer to the forum), my experience is, however, that I did not get any reply of any use, despite the fact that there were immediate responses.

I could not agree more with your statement that “All people here try to help as good as they can.”
Reading the major part of the threads here, however, one gets the impression that many posters are extremely superficial; some seem not to have read what was posted, or not to have understood it.

A reply with the line “zero knowledge about RAID” to a thread which is asking a question about RAID is self-explanatory. (I hope this is now politically correct and not rude again.)

BTW: I have solved all the problems laid down at the start of my thread in the meantime. Should anyone be interested in the solution, please let me know.

My system is openSuSE 13.10 with the latest patch levels (13.10.7.10).

It is common that if you find a solution to a problem you post it so others that may run into the problem can benefit. That is what the community is about.

OK, here is my solution as of the moment.

  1. Install the EFI partition on /dev/sda1 at mount point /boot/efi. (This is the standard for EFI boot.) The standard size proposed by openSuSE is 156.xx MB.
  2. Create a same-sized FAT partition on /dev/sdb1; do not mount that one (/dev/sdb1).
  3. Create 2 or 4GB partitions of type LINUX RAID on both disks for swap (larger swap is not recommendable) as /dev/sda2 and /dev/sdb2.
  4. Create LINUX RAID partitions on the remaining space of your disks as /dev/sda3 and /dev/sdb3. (I do not use a separate /home. If you want that, create several RAID partitions.)
  5. Assemble the RAID partitions /dev/sda3 and /dev/sdb3 into the RAID array /dev/md1, file system EXT4 (I have not checked with BTRFS yet), at mount point /.
  6. Assemble the RAID partitions /dev/sda2 and /dev/sdb2 into the RAID array /dev/md2, file system SWAP.
  7. Install your system and boot it.
  8. “dd” /dev/sda1 to /dev/sdb1.
  9. gdisk /dev/sdb
  10. “i” for details on partition no. 1
  11. Copy the /dev/sdb partition GUID code.
  12. efibootmgr --create --disk /dev/sdb --part 1 --label “openSuSE /dev/sdb” --loader \EFI\opensuse\grubx64.efi (add /dev/sdb1 to the EFI boot entries)
  13. Swap /dev/sda and /dev/sdb physically.
  14. Reboot.
  15. Have yast bootloader rewrite the EFI bootloader to correct the UUID of the disk passed to the kernel.
  16. Swap the disks back, or BETTER, exchange the boot sequence with efibootmgr.

Now your system will EFI boot even after removal of the /dev/sda hard disk.
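The two command-line steps above (8 and 12) can be sketched like this. The device names are the ones from this thread and must be checked against your own system; the function only prints the commands rather than executing them:

```shell
# Sketch of steps 8 and 12: clone the ESP to the second disk, then register
# the copy with the firmware. Device names are from this thread; verify them
# before running anything as root.
esp_clone_cmds() {
  printf '%s\n' \
    'dd if=/dev/sda1 of=/dev/sdb1 bs=1M' \
    "efibootmgr --create --disk /dev/sdb --part 1 --label 'openSuSE /dev/sdb' --loader '\\EFI\\opensuse\\grubx64.efi'"
}
esp_clone_cmds
```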

I would appreciate it, however, if openSuSE could automate this in YaST in a future version.

This results in duplicated filesystem UUID and LABEL. Not a big deal but confusing. “mkfs -t vfat; cp -a” would work just fine as well.
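That alternative might be sketched like this (the mount point /mnt and the device names are assumptions; the function only prints the commands for review):

```shell
# Sketch of the mkfs + cp alternative to dd: a fresh filesystem gets its own
# UUID, avoiding the duplicate that dd creates. Names are hypothetical.
esp_copy_cmds() {
  printf '%s\n' \
    'mkfs -t vfat /dev/sdb1' \
    'mount /dev/sdb1 /mnt' \
    'cp -a /boot/efi/. /mnt/' \
    'umount /mnt'
}
esp_copy_cmds
```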

  1. efibootmgr --create --disk /dev/sdb --part 1 --label “openSuSE /dev/sdb” --loader \EFI\opensuse\grubx64.efi (add /dev/sdb1 to EFI boot)

This is for the non-secure-boot case. For secure boot you need to add shim.efi.
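For the secure-boot case that might look like this (a sketch: the label, paths, and device names are assumptions, and the command is only printed for review):

```shell
# Sketch: register shim.efi instead of grubx64.efi for secure boot.
# Device name and label are hypothetical examples.
secure_boot_cmd() {
  printf '%s\n' \
    "efibootmgr --create --disk /dev/sdb --part 1 --label 'openSuSE secure /dev/sdb' --loader '\\EFI\\opensuse\\shim.efi'"
}
secure_boot_cmd
```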

  1. Swap /dev/sda and /dev/sdb physically
  2. reboot
  3. have yast bootloader rewrite the EFI bootloader to correct the UUID of the disk passed to the kernel.

Please explain what you mean here. Your kernel gets passed /dev/md1 (or whatever refers to /dev/md1); what do you want to rewrite?

Now your System will EFI boot even after removal of /dev/sda harddisk.

You need to repeat copying the content of /boot/efi/EFI/openSUSE to the other partition every time grub2 or shim gets updated.
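A possible re-sync step to run after each grub2/shim update (a sketch: the second ESP device and the temporary mount point are assumptions; the commands are only printed for review):

```shell
# Sketch: refresh the spare ESP copy after a bootloader update. /dev/sdb1
# and /mnt are hypothetical; adjust before running as root.
esp_resync_cmds() {
  printf '%s\n' \
    'mount /dev/sdb1 /mnt' \
    'rsync -a --delete /boot/efi/EFI/ /mnt/EFI/' \
    'umount /mnt'
}
esp_resync_cmds
```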

I would appreciate it, however, if openSuSE could automate this in YaST in a future version.
No amount of shouting on the forum will do it. Open a feature request, or better, implement it and submit a patch.

  1. You are absolutely right regarding the duplicate filesystem UUID. I forgot that after the “dd”, an “sgdisk -G /dev/sdb” is needed to create new (randomized) disk and partition GUIDs. Your approach, however, is much better than mine. THANKS!

  2. I have not tried any secure boot yet - I am not using Windows… But your hint is very much appreciated.

3) Sorry for my ignorance about the kernel parameter. I thought it would pass a reference to /dev/sda, which is actually not the case. All the disk-swap actions are obsolete because (as you have clarified perfectly) the parameter passed to the kernel references /dev/md1. So this can remain identical on both disks.

Thanks for your explanation about “shouting on the forum”, “feature request”, etc. Unfortunately I am familiar neither with the programming language YaST is written in nor with the YaST implementation itself. So I am afraid I have very little chance of implementing the feature myself. But let me think about it.

My friends call me by name, others call me by value.