Linux Raid not fully pieced together by LEAP 15 Installer

I’m replacing my system disks with a new set, and have a set of 4 other HDDs in RAID 5 that looked like this:

/dev/md127:           Version : 1.0
     Creation Time : Sun Apr 24 16:08:10 2011
        Raid Level : raid5
        Array Size : 11721049344 (11178.06 GiB 12002.35 GB)
     Used Dev Size : 3907016448 (3726.02 GiB 4000.78 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Sep 28 22:28:59 2018
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 128K

Consistency Policy : bitmap

              Name : linux.site:3
              UUID : 7125fc77:da388512:e0740464:4af1e734
            Events : 63789

    Number   Major   Minor   RaidDevice State
       6       8       33        0      active sync   /dev/sdc1
       7       8       81        1      active sync   /dev/sdf1
       5       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1

The RAID seems to be recognized by the LEAP 15 installer when I get to the partitioner… but it doesn’t seem to know which 4 disks make up the RAID or how they were assembled. Can I safely install the system on the two other new drives and then reassemble this RAID correctly? Or is it lost now that I removed the old system drives? I saved the original fstab if that helps at all.

I do not understand this. Please show a photo of the screen where this is visible and explain what is wrong on that screen. Also describe, step by step, what you selected in the partitioner.

Can I safely install the system on the two other new drives and then reassemble this raid correctly?

According to your output this array is using a persistent superblock, so it should be recognized by any Linux, as long as you are careful not to overwrite these disks.
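Before installing anything, one safe way to confirm the superblocks are intact is to run read-only mdadm queries from a live or rescue shell (the device names below are taken from the --detail output in this thread and may differ on your system):

```shell
# Read-only: dump the RAID superblock stored on each member partition.
# The Array UUID and Name fields should match the --detail output above.
mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Show which arrays the kernel has currently assembled, if any.
cat /proc/mdstat
```

Neither command writes to the disks, so this is safe to run at any point.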

I saved the original fstab if that helps at all.

No, fstab has nothing to do with Linux MD.

Here’s a series of photos from the partitioner:

Partitions Total View: SUSE Paste
RAID Details: SUSE Paste
Raid Detected Drives: SUSE Paste
Partition Tree 1 of 2: SUSE Paste
Partition Tree 2 of 2: SUSE Paste

The default suggested partitioning was no good, so I selected advanced partitioning and imported the existing partitions. As you can see in the pictures, the 4 drives that make up the Raid 5 are detected as Linux Raid partitions, and the raid itself is detected as linux.site:3, but it doesn’t seem to know that sdc, sdd, sde, and sdf are what make up the Raid (as was also the case with the previous install). You can see that it keeps them separate and offers them as an option to build a new raid, but it doesn’t say they are part of the linux.site:3 raid. So I’m not sure what to do from here.

What other files would help me resolve this from the old disk, if any? I replaced sda and sdb, but the old disk shows up in these screens as sdg, and I have it set to be reformatted and mounted as a data_tmp folder and swap space. I could forego the reformat for the moment and pull the files from the drive, or boot from it. No harm has been done yet. :slight_smile:

Any thoughts always appreciated. Thanks!

Those two look pretty good.

Raid Detected Drives: SUSE Paste
Well, it does look like it thinks these partitions are not in use. You may consider opening a bug report for this.

So I’m not sure what to do from here.
If you are concerned, unplug the RAID disks, install the new version, then plug the disks back in and import the RAID.
What other files would help me resolve this from the old disk, if any?
No files are really needed. RAID is autodetected; each device is scanned and incrementally added to the RAID array, and when all (or at least enough) devices have appeared, the RAID is started. You can control which arrays are started automatically using /etc/mdadm.conf, but it can always be regenerated if needed. You may want to keep a copy for reference.
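For reference, `mdadm --detail --scan` prints one ARRAY line per assembled array, which can be appended to /etc/mdadm.conf. For the array in this thread (name and UUID taken from the --detail output in the opening post), the regenerated entry would look roughly like:

```
ARRAY /dev/md127 metadata=1.0 name=linux.site:3 UUID=7125fc77:da388512:e0740464:4af1e734
```

Even without this entry, mdadm can assemble the array from the on-disk superblocks; the config line just pins the device name and lets you control which arrays are auto-started.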

The picture “Raid Detected Drives: https://paste.opensuse.org/51252248” shows Raid 0, but in your opening post you wrote Raid 5.
Which type do you want to use?

This is the RAID creation screen in the installer. It has the options for which Raid level to build at the top (currently Raid 0 is selected by default), but more importantly it should only be showing drives that you could build a new Raid from. So in this instance it’s showing the 4 drives that should hold an existing Raid 5, but it doesn’t think they are utilized anywhere, even though it detects a Raid called linux.site:3 (it just doesn’t know how they’re pieced together).

Here’s an update on where I’m at now with this effort: thanks for the help, arvidjaar; shutting down the installer, disconnecting the old OS drive (sdg), and then starting the installer again cleared all this up. It seems the partitioning information the installer was grabbing from the old drive (since it hadn’t been formatted yet, and was from a Raid 1 config) was really confusing it. I don’t think it should have been, so there’s probably a bug or some logic it can’t work through there.

Long story short, I now have the new OS installed, and it recognizes the Raid 5 drives correctly. Thanks for the help! :good: