Replaced motherboard, now can't mount drive or RAID unless formatted

Replaced a faulty MSI motherboard with a Gigabyte GA-880GM-UD2 Ver. 1.3 (to keep floppy drive capability). I failed to find the combination of SATA connections (SATA ports are numbered differently: 0-4, replacing 1-6 on the dead MSI board) that would boot properly from 15.1 on sdb, so I started a fresh install of Leap 15.1.

At the partitioning stage, the installer doesn’t allow me to mount the RAID drives (sda and sdc) unless they are formatted first. Likewise, it won’t accept the /home partition on sdb without formatting. This is not the experience I’ve had previously, where /home partitions were accepted without formatting. Since the whole point of saving this computer with a new motherboard was to preserve the data on the RAID drives (simple duplicated content), I can’t reformat. This RAID array was created under Leap 15.0 on the previous motherboard, using the RAID setup in the partitioning step of the Leap installer.

Is the order of the drives on the SATA ports significant? Should RAID be on consecutive SATA ports? The BIOS screens say to use SATA 4/5 for RAID, which I translate into SATA ports 3/4 when numbering starts at 0 (3/4 being the highest-numbered ports on the board). The Gigabyte manual describes configuring RAID in the BIOS, but I fear that approach would lead to formatting and loss of data.

Any insights? If RAID setup fails, any hints on mounting a single disk from the RAID pair to recover data?

You could install Leap 15.1 or even 15.0 on a USB stick (leaving the two disks untouched) and boot from there.
Then you have KDE (which you use anyway) along with all the command line tools, which will probably be the most convenient way to try to mount partitions of one of the disks and see whether that works and what you may find.

To install on the USB stick, select expert mode in the installer’s partitioner and then “Start with existing partitions”. Delete the existing partition on the USB stick (probably a FAT32 file system) and create on it root (ext4, 10-12 GiB if you don’t install LibreOffice, mount point /), home (2-3 GiB, mount point /home) and swap (1-2 GiB, if at all, mount point swap). If you also install GParted (probably in the online OSS repo), things will be more comfortable.

Something doesn’t work well here.
I just installed 15.1 on an SSD without formatting the /home partition (only deleting the hidden files in /home beforehand), which worked perfectly.
You write that /dev/sdb is the drive your /home is on, and that this is where you tried the same, while the two HDDs which formed the former RAID are /dev/sda and /dev/sdc.
I don’t know how you ended up with that order of drives, with the non-RAID disk /dev/sdb sitting between the two RAID disks /dev/sda and /dev/sdc.

Anyway, another “stupid” idea, which may depend on how valuable/precious the data on the two former RAID disks is for you:
Try to buy, second-hand, the same motherboard model you had before.
Plug the HDDs in the same way as before, and set up the BIOS the same way.

Agree with @ratzi,
My personal experience with hardware RAID is that the easiest path to recovery is to use an identical RAID controller and drivers. It’s perhaps an argument for using card disk controllers and not controllers mounted on the system board.

Otherwise, you can try to mount a single drive (assuming RAID 1) as a broken set, and perhaps only as a data drive, simply for data recovery rather than as a running system.
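If the array was Linux software RAID (mdadm), mounting one drive of the pair as a degraded set might look like the sketch below; the partition name /dev/sdc1 and array name /dev/md0 are assumptions, adjust to what your system shows:

```shell
# Inspect the partition first; this changes nothing on disk.
# If it prints an mdadm superblock, the array is Linux software RAID.
sudo mdadm --examine /dev/sdc1

# Assemble a degraded (single-disk) array read-only from that one drive.
sudo mdadm --assemble --readonly --run /dev/md0 /dev/sdc1

# Mount it read-only, purely for data recovery.
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/md0 /mnt/recovery
```

If --examine finds no mdadm metadata, the array was probably BIOS/fake RAID and this approach won’t apply.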


It’s not clear to me whether or not this is an example of why BIOS RAID is recommended against by astute and experienced RAID users. Lose a motherboard, and lose the RAID content. Is BIOS RAID what you had?

With software RAID, that needn’t happen. Twice here I migrated existing RAIDs between different motherboards, none of which even supported BIOS RAID.
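With Linux software RAID the array metadata lives on the disks themselves, so reassembly after a board swap is typically just a scan (a sketch, assuming mdadm arrays):

```shell
sudo mdadm --examine --scan    # report arrays described by on-disk metadata
sudo mdadm --assemble --scan   # assemble them, regardless of SATA port order
cat /proc/mdstat               # confirm the arrays came up
```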

What happened to the old motherboard? If it’s from 13 or more years ago, it might be a bad-capacitors victim. Caps can often be successfully replaced.

Another thought on that - sorry, I’m not working with RAIDs and may have been too quick on that.

Yes, then this RAID should have been a RAID 1, with identical data on both drives.

But if you possibly would like to use them in a RAID 1 in the same way again, then the contents of the two drives should remain identical, even if you try to mount partitions and look at the data there on only one of the two drives.
The point is not losing data, if the content really is simply duplicated. It’s about possibly changing something on only one of them, even if it is only the access time of a file.

In this situation it is probably a good idea to mount any of those two drives read-only, if you consider using them together in a RAID 1 again afterwards.
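If you prefer the command line over YaST, a read-only mount is a one-liner; the partition name /dev/sda2 here is an assumption, check with “lsblk” or “parted -l” first:

```shell
sudo mkdir -p /mnt/RAID_drive_1_home
sudo mount -o ro /dev/sda2 /mnt/RAID_drive_1_home
```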

Assume you have installed 15.1 on a USB stick and booted from there with one of the drives plugged and running.
Then in YaST > Partitioner click on “Hard Disks” in the left part of the window. Then in the right part of the window click on the partition of the drive you want to mount, so that it is highlighted (hopefully you can see the drive and its partitions then; if not … was it really a RAID 1?).
Click below on “Modify” and select “Edit Partition …”.
“Do not format device” should be pre-selected. Click “Mount device” and enter a Mount Point like e.g. “/mnt/RAID_drive_1_home”.
Click on “Fstab Options …”.
Select mounting by “Device ID”.
Then check:

  • Mount Read-Only
  • Mountable by User
  • Do Not Mount at System Start-up

In the last field, “Arbitrary Option Value”, enter “nofail”.

Click OK, then NEXT, NEXT, FINISH.

Then a line like

/dev/disk/by-id/ata-ST2000NM0033-9ZM175_Z1X0TDA9-part8  /mnt/RAID_drive_1_home  ext4  ro,user,noauto,data=ordered,nofail  0  0

should have been added to your /etc/fstab, in which “ro” stands for read-only. And in /mnt/ a folder, i.e. the mount point ‘RAID_drive_1_home’, should have been created, through which the corresponding partition can be accessed after being mounted. In KDE, which you use, mounting is then easy, and then takes place read-only.

Besides, how old are these two drives?
If they’re not configured as RAID but as single drives, then you should be able to find out a bit more about their health using “smartctl” at the command line.
It is not necessary to mount any partition of a drive to do that.
And to find out which “/dev/sdX” you need as the parameter for “smartctl”, run “parted -l” at the command line.
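For example (the device name /dev/sda is an assumption, and smartmontools must be installed):

```shell
sudo parted -l               # list drives and partitions to find the right /dev/sdX
sudo smartctl -H /dev/sda    # quick overall health verdict (PASSED/FAILED)
sudo smartctl -a /dev/sda    # full SMART attributes, e.g. reallocated sector count
```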