Tumbleweed loses RAID configuration after reboot

A freshly installed Tumbleweed loses my RAID configuration after a reboot of the system.

  • Back up the RAID array to a separate disk
  • Restart the system to reinstall
  • Two disks are configured as RAID1 during installation (/dev/sda and /dev/sdb)
  • Disk is formatted XFS with encryption
  • Mount point is set to /data
  • Boot system after installation
  • cat /proc/mdstat reports the status of the RAID (see the check commands sketched after this list)
  • Transfer all data back to RAID device
  • Install package after fresh install
  • Reboot
  • cat: /proc/mdstat: No such file or directory
  • Partitioner in YaST2 only reports 1 disk to be mounted as /data
  • No RAID array is shown in YaST
  • Adding a RAID array fails with “not enough suitable devices”
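For reference, these are the checks I run right after boot to see whether the array came up; a minimal sketch, assuming the array was created as /dev/md0 on /dev/sda and /dev/sdb as above:

# Has the kernel assembled any md arrays at all?
cat /proc/mdstat

# Details of the assembled array (assuming it is /dev/md0)
sudo mdadm --detail /dev/md0

# Read the RAID superblocks straight off the disks, ignoring mdadm.conf
sudo mdadm --examine --scan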

sudo lsblk

NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda         8:0    0 931.5G  0 disk
sdb         8:16   0 931.5G  0 disk
└─cr_data 254:1    0 931.5G  0 crypt /data
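Note that cr_data sits directly under sdb rather than under an md device, so it looks like the encrypted container was unlocked on a raw RAID member instead of on the assembled array. A sketch of how to confirm that, assuming the mapping name cr_data from above:

# Which block device actually backs the crypt mapping?
sudo cryptsetup status cr_data

# Does sdb (the member disk) still carry an md superblock?
sudo mdadm --examine /dev/sdb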

cat /etc/mdadm.conf

DEVICE containers partitions
ARRAY /dev/md0 UUID=14fbead6:3304801a:ead6cb97:07d808d0
ARRAY /dev/md0 UUID=972654a6:fd4d9f5e:6a6f1176:2f47cad5
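The two ARRAY lines with the same name but different UUIDs are presumably a leftover from before the reinstall. One way to regenerate the file from the arrays mdadm can actually see; a sketch, and not what ultimately fixed my problem (see the end of the post):

# Keep a copy of the current config
sudo cp /etc/mdadm.conf /etc/mdadm.conf.bak

# Print ARRAY lines for the arrays that are currently assembled;
# once the output looks sane, replace the stale entries with it
sudo mdadm --detail --scan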

sudo mdadm -A -s

mdadm: Devices UUID-14fbead6:3304801a:ead6cb97:07d808d0 and UUID-972654a6:fd4d9f5e:6a6f1176:2f47cad5 have the same name: /dev/md0
mdadm: Duplicate MD device names in conf file were found.
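Assembling by UUID on the command line sidesteps the name clash in the config file; a sketch, assuming the second UUID is the one belonging to the current array:

# Assemble explicitly by UUID instead of relying on mdadm.conf
sudo mdadm --assemble /dev/md0 --uuid=972654a6:fd4d9f5e:6a6f1176:2f47cad5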

I’ve been poking at this for some time now and I don’t see what is wrong with the configuration.

Update

  • Booted a Leap 15.5 live image; the RAID array is recognized
  • Array was synced in the live image session and completed successfully
  • Rebooted back to normal Tumbleweed installation
  • Boot failed
  • Boot dropped to maintenance terminal

systemctl status data.mount

× data.mount - /data
     Loaded: loaded (/etc/fstab; generated)
     Active: failed (Result: exit-code) since Fri 2023-07-14 19:33:17 CST; 1min 10s ago
      Where: /data
       What: /dev/mapper/cr_data
       Docs: man:fstab(5)
             man:systemd-fstab-generator(8)
        CPU: 3ms

Jul 14 19:33:17 localhost systemd[1]: Mounting /data...
Jul 14 19:33:17 localhost mount[1082]: mount: /data: wrong fs type, bad option, bad superblock on /dev/mapper/cr_data, missing codepage or helper program, or other error.
Jul 14 19:33:17 localhost mount[1082]: dmesg(1) may have more information after failed mount system call.
Jul 14 19:33:17 localhost systemd[1]: data.mount: Mount process exited, code=exited, status=32/n/a
Jul 14 19:33:17 localhost systemd[1]: data.mount: Failed with result 'exit-code'.
Jul 14 19:33:17 localhost systemd[1]: Failed to mount /data.
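From the maintenance shell I worked through roughly the following before reaching for xfs_repair; a sketch, with device names as in the layout above:

# Kernel messages around the failed mount
dmesg | tail -n 20

# Is the dm-crypt mapping open, and what backs it?
sudo cryptsetup status cr_data

# Read-only consistency check of the XFS filesystem before repairing
sudo xfs_repair -n /dev/mapper/cr_data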

  • Ran xfs_repair on the md array
  • xfs_repair repaired a node
  • Rebooted into Tumbleweed
  • RAID array is not recognized

mdadm -D /dev/dm-0
mdadm: /dev/dm-0 does not appear to be an md device

  • Even though this array had just been repaired by xfs_repair
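/dev/dm-0 is the device-mapper node behind cr_data, so mdadm rightly refuses to treat it as an md device; the md-side checks need to target the array or its member disks instead. A sketch:

# --detail expects an assembled md device
sudo mdadm --detail /dev/md0

# --examine reads the RAID superblock from the members themselves
sudo mdadm --examine /dev/sda /dev/sdb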

Resolved by the fix in this bug report: Bug 1213227 – Recent update broke boot for me, raid not assembled, probably initramfs issue, very custom disk/partition layout
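Since the underlying bug was an initramfs issue, the step likely to matter for anyone else landing here is rebuilding the initramfs once /etc/mdadm.conf is correct; a sketch, not quoted from the bug report itself:

# Rebuild the initramfs so it picks up the corrected RAID configuration
sudo dracut -f --regenerate-all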