Setting up and rebuilding a RAID

Hey there, Matt here.

Currently I’m running a RAID5 with 5x 3TB drives on an ASUS Crosshair V Formula-Z using its on-board fake-RAID controller. I already had one drive failure (the rebuild went fine after the replacement) and am now planning to set up a self-built NAS with openSUSE. This time I plan a RAID6 (I can afford the additional drive for the additional redundancy - RAID5 is critical with one failed drive and fails completely if another drive dies during the rebuild). The fake-RAID is not compatible with Linux - Linux shows the 5 drives not as a RAID but as single drives - only Windows is able to use it as a RAID when installed with the RAID driver, hence why it’s called fake-RAID.
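For what it’s worth, that is easy to verify from any Linux live system - these two commands are generic examples, nothing openSUSE-specific:

lsblk                # the five disks just show up as plain individual drives
cat /proc/mdstat     # lists the software (md) arrays the kernel has assembled - the fake-RAID doesn’t appear here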

So, to see how to set up a soft-RAID on openSUSE and to see what happens when a drive is replaced, I set up a VM with 1 system drive + 6 drives for the RAID. I used YaST to add raw partitions on the drives and marked them as RAID so I could create a RAID6.
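For anyone who prefers the command line over YaST: the same array can also be created directly with mdadm. Just a sketch - it assumes the six RAID partitions are /dev/sdb1 to /dev/sdg1, so adjust the device names to your setup:

sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1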
The first “strange” behaviour I noticed when using mdadm --detail /dev/md0: it says rebuilding right after creation. I thought the partitioner was responsible for setting up the RAID completely - but instead the array apparently just gets initialized and is then “created” by “re-building” (syncing) it for the first time.
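While that initial sync is running you can watch its progress - these are the standard md status commands, nothing YaST-specific:

cat /proc/mdstat               # progress bar and estimated finish time for the resync
sudo mdadm --detail /dev/md0   # the State line and the resync/rebuild status show the same information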
To simulate a failed drive I simply replaced one of the virtual drives with a new one - I chose the 3rd drive at random. It’s shown as “removed”. So I started YaST, partitioned the new drive like the others before, and expected to find some option to rebuild the RAID - but there was none.
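For reference, this is where the “removed” state shows up (a sketch against the /dev/md0 array from above):

sudo mdadm --detail /dev/md0   # one device slot now reads "removed", the array state is "clean, degraded"
cat /proc/mdstat               # the array is listed with a missing member, e.g. [6/5] [UU_UUU]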
I thought a simple restart might start the rebuild - but it didn’t.
Google led me to a guide from Red Hat which says I have to remove the failed drive first. I tried mdadm --manage /dev/md0 --remove /dev/sdd1 - but got an error saying /dev/sdd1 does not seem to be part of the RAID.
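In hindsight the error makes sense: the failed (virtual) disk disappeared together with its device node, and the freshly partitioned /dev/sdd1 has never been a member of the array, so there is nothing for mdadm to remove. This can be checked with:

sudo mdadm --examine /dev/sdd1   # reports no md superblock - the new partition doesn’t belong to any array yet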

So: after replacing the disk - how do I initiate the rebuild?

Ok, I managed to figure it out:

sudo mdadm --manage /dev/md0 --add /dev/sdd1

did the trick. It also automatically initiated the rebuild.
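Two things worth adding: the rebuild can be followed live, and if a failed disk is still visible to the system (unlike my vanished virtual one), it has to be failed and removed first before the replacement is added - roughly like this:

sudo mdadm --manage /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1   # only needed while the old disk is still present
sudo mdadm --manage /dev/md0 --add /dev/sdd1                       # add the replacement, rebuild starts automatically
watch cat /proc/mdstat                                             # live view of the rebuild progress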