Migrating from Fedora 30, LVM RAID 5 Issues

I am trying to move my daily driver over to openSUSE Tumbleweed from Fedora 30. The installer is getting an error reading my drive configurations. Specifically, I have a RAID-5 array defined via LVM, formatted XFS, with a 500GB SSD cache drive, that the installer is having trouble reading. What can I do to remedy this?

Current layout (Fedora 30):

/ = 1TB NVMe drive (ext4)
|__/home = 4TB HDD (ext4)
/mnt/data = 6TB LVM RAID-5 + 500GB SSD cache drive (xfs)
|__/mnt/raid5 = 8TB external HW RAID-5 (ext4)

Planned layout (openSUSE):

/ = 1TB NVMe drive (btrfs)
|__/home = 4TB HDD (ext4)
/mnt/data = 6TB LVM RAID-5 + 500GB SSD cache drive (xfs)
|__/mnt/raid5 = 8TB external HW RAID-5 (ext4)

Just a thought, since I use RAID1 only and don’t use LVM: I know the installer was recently rewritten and still has issues to address, particularly around RAID. I suggest limiting the installation’s involvement to the SSD. After installation, the RAID and LVM can be incorporated manually if YaST proves incapable. Alternatively, install a minimal 42.3 with LVM and RAID5, then do an online upgrade. Experience with TW has shown online upgrades to be highly reliable when instructions are followed and limitations are understood.
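A minimal sketch of that "incorporate manually after install" step, assuming the VG/LV names (`raid5vg`, `raid5lv`) and the XFS UUID that appear elsewhere in this thread. It is dry-run by default, so the commands only print for review:

```shell
#!/bin/sh
# Sketch: manually activate and mount an existing LVM RAID-5 LV after install.
# VG/LV names and the XFS UUID are taken from elsewhere in this thread;
# DRY_RUN=1 (the default) prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run vgscan                               # rescan devices for volume groups
run vgchange -ay raid5vg                 # activate every LV in the VG
run mkdir -p /mnt/data
run mount /dev/raid5vg/raid5lv /mnt/data
# Persist across reboots; mounting by UUID survives device renames:
run sh -c 'echo "UUID=53ab0da0-576e-4ee9-a466-573c20f5ca4c /mnt/data xfs defaults 0 0" >> /etc/fstab'
```

Set `DRY_RUN=0` only once the printed commands look right for your system.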

Thank you. I will give that a try.

I’d expect all sorts of possible gotchas for what you are likely trying to do…
You might want to get on one of the btrfs lists and ask some specific questions…

Main question in my mind would have been conversion of LVM volumes to btrfs volumes.
Apparently it’s likely possible, although in the following discussion thread it originally didn’t seem to work and then the guy got a pleasant surprise. (I’ve done the same with regular partitions: when the partition table was damaged or removed, if you recreate it exactly, all the data is immediately accessible again.)


But read the important follow-ups in the above thread, which may make you want to do a restore from backup instead of a conversion; you’d possibly lose certain desirable features like compressed metadata.

I’m not even sure how I’d approach converting from another distro like Fedora to openSUSE…
If you do so, then how you do it could be important.


I don’t want to do that. BTRFS has a write hole with RAID-5/6 and is unstable in those configurations. No thanks. I just want to mount my LVM RAID-5 (XFS) + cache drive on an openSUSE install.

So I migrated. It went pretty well, except that the error persists even after installation. Yes, I can access the array and the cache drive is working, but YaST (both GUI and ncurses) borks at the configuration, and parts of it don’t work (like the boot loader editor).

The error is

Unknown device "/dev/raid5vg/[raid5lv_corig]": No such file or directory 

The blkid output for the affected devices:

/dev/mapper/raid5vg-raid5lv_corig_rimage_0: UUID="53ab0da0-576e-4ee9-a466-573c20f5ca4c" TYPE="xfs"
/dev/mapper/raid5vg-raid5lv_corig_rimage_2: PTTYPE="atari"
/dev/mapper/raid5vg-raid5lv_corig_rimage_3: PTTYPE="atari"
/dev/mapper/raid5vg-raid5lv: UUID="53ab0da0-576e-4ee9-a466-573c20f5ca4c" TYPE="xfs"

It’s that raid5vg-raid5lv_corig that is causing the problems.
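One plausible source of the confusion, judging from the blkid output above: the hidden cache-origin sub-LV (`..._corig_rimage_0`) exposes the same XFS UUID as the top-level LV, so tools see the filesystem twice. A quick sketch for spotting such duplicates (sample lines are inlined here; on a live system you would pipe `blkid` into the function):

```shell
#!/bin/sh
# Print any filesystem UUID that appears on more than one block device.
find_dup_uuids() {
    awk -F'UUID="' '/UUID=/ { split($2, a, "\""); print a[1] }' \
        | sort | uniq -d
}

# Sample input copied from the blkid output above; live usage:
#   blkid | find_dup_uuids
printf '%s\n' \
  '/dev/mapper/raid5vg-raid5lv_corig_rimage_0: UUID="53ab0da0-576e-4ee9-a466-573c20f5ca4c" TYPE="xfs"' \
  '/dev/mapper/raid5vg-raid5lv: UUID="53ab0da0-576e-4ee9-a466-573c20f5ca4c" TYPE="xfs"' \
  | find_dup_uuids
```

If that prints the shared UUID, two devices are advertising the same filesystem, which is exactly the kind of thing that can trip up partitioning tools.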

As the YaST partitioning software was rewritten not long ago, it could be that the devs are eagerly awaiting any reports. Thus filing a bug report might be useful.

Does it make sense to set up a disk with separate LVM and BTRFS volumes?
That probably requires some research.
And, if it’s not documented anywhere, then maybe it’s something that should be experimented with.

In openSUSE 15.1, the entire disk is configured as a single BTRFS volume with sub-volumes for various mount points.
Prior to 15.1, openSUSE configured a BTRFS volume and a separate partition not part of the BTRFS volume for /home.

The more I think about this, I suppose there should not be a problem; you may even be able to use the YaST LVM module (I’m not talking about the Partitioner) to manage both. At least I’ve noticed that no matter whether you install with LVM or BTRFS, the same module is supposed to support both seamlessly, so I’d expect that could also mean simultaneously. But I don’t know about the command-line tools; on its face, different tools are provided to manage each type of volume.

If you go forward with this, please report your findings… It’d be useful to anyone else considering the same.


Bug 1139492 has been submitted regarding this.

The LVM setup consists of five drives: 4x2TB HDDs plus 1x500GB SSD, configured as a RAID-5 array and cache drive respectively. The resulting space is formatted XFS. The combo is supposed to be portable between systems, as the underlying technologies are baked into every kernel. This configuration was created using Fedora 30 and worked without errors (at least I didn’t see any). openSUSE seems a bit more finicky.
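For reference, a hedged sketch of how a layout like that is typically assembled with stock LVM tools. Device names and sizes below are placeholders, not taken from this thread; the recipe is printed for review rather than executed, since these commands destroy existing data. Note that the final `lvconvert` step is what creates the hidden `raid5lv_corig` origin sub-LV that the earlier error message refers to:

```shell
#!/bin/sh
# Sketch only: placeholder device names, approximate sizes.
print_recipe() {
cat <<'EOF'
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
vgcreate raid5vg /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# RAID-5 across the four HDDs: 3 data stripes + parity -> ~6TB usable
lvcreate --type raid5 -i 3 -L 5.4T -n raid5lv raid5vg /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# Cache pool on the SSD, then attach it to the RAID LV
lvcreate --type cache-pool -L 450G -n cachepool raid5vg /dev/sdf1
lvconvert --type cache --cachepool raid5vg/cachepool raid5vg/raid5lv
mkfs.xfs /dev/raid5vg/raid5lv
EOF
}
print_recipe
```

After the `lvconvert`, `lvs -a` shows bracketed internal LVs such as `[raid5lv_corig]`; those are implementation details of the cache that tools are supposed to ignore, which is presumably where YaST is tripping up.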