I guess this is my first post in the forum, so hi everyone. I haven’t found this issue in my searches, so here goes:
I am currently using 11.0 and am planning to upgrade to 11.1 at release time if possible. I’m not new to SUSE; it’s the distro I have been using since 9.0. I downloaded the latest development release, and the new partitioner GUI seems a little confusing; besides that, it doesn’t properly detect my software RAID 1.
I have 3 HDs:
sda for OSs, 4 partitions
sdb for /home and swap
sdc for /home and swap
/dev/sdb1 and /dev/sdc1 are configured in a RAID1 array (/dev/md0). I have been using it for several months now as my /home partition, and it’s properly detected by the openSUSE 11.0 installer but not by the 11.1 one.
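(For reference, and assuming a standard mdadm setup, a mirror like this would have been created with something along these lines; the exact options on my system may have differed:

# hypothetical re-creation of the /home mirror
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

with both partitions typed as 0xFD, Linux raid autodetect.)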
When I look at the RAID section, I get /dev/md126 and /dev/md127, but neither of those matches my 400GB RAID1.
I tried to reproduce the situation in a virtual machine under VirtualBox (first installing 11.0 with soft RAID1 for /home and then installing 11.1), with no success.
Any ideas? Thanks in advance.
EDIT: My RAID1 is 440GB but that doesn’t really make a difference…
Boot from the DVD and hit Ctrl-Alt-F4 to see the kernel log; do you see mdadm loading and detecting an array? You can also do Ctrl-Alt-F9 (or F2); you’ll see a root prompt. There, do something like:
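cat /proc/mdstat          # shows which md arrays, if any, the installer has assembled
mdadm --examine --scan    # prints an ARRAY line for each raid superblock found on the disks

If the second command lists your array’s UUID but /proc/mdstat shows nothing sensible, the members are being seen but not assembled correctly.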
I did that and found nothing I could relate to mdadm. I tried the i586 and x86_64 versions with the same result. The weird thing is that half the time I try, the GUI will show a /dev/md0 besides 126 and 127, but /dev/md0 shows no member volumes and all of its properties are empty.
I also tried using the text mode installer and, as I am Spanish, tried booting the installer in Spanish and English, but the result is the same. :\
What about this . . . switching to the Ctrl-Alt-F2 console, try mdadm query commands, like:
mdadm -D /dev/md0
mdadm -E /dev/sdb1
And then the same on /dev/md126 and /dev/md127. I checked, and the md driver is loaded by the installation kernel, so the mdadm admin commands are available there.
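That is, something like:

mdadm -D /dev/md126
mdadm -D /dev/md127
mdadm -E /dev/sdc1

-D prints the details of an assembled array, while -E reads the raid superblock straight off a member partition, so you can compare the UUID your 11.0 system wrote with whatever the 11.1 installer assembled as md126/md127.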
Hope you are feeling better this morning. I’m under the weather myself . . .
Sorry I don’t have any suggestions other than checking the linux-raid list at kernel.org to see if there are any bugs or regressions reported. It does sound like a kernel issue. This could be a serious problem for a lot of users, including myself (I use 3 arrays). If you learn anything further, I know it would be appreciated if you could share it back here.
When I get the time I’ll burn a DVD with 11.1/RC1 and test its behavior on my arrays. I’ll report back what I find.
Update: I started an upgrade today using 11.1 RC1 to test how it would see my RAID arrays. I only went as far as the step where the installer searches the disks and presents root choices to be mounted (I have 3 instances on this machine, only 1 of which uses the arrays). I actually expected there might be a problem, because that root is on RAID0; unlike RAID1 support, which is now built into the kernel, RAID0 requires a kernel module in the initrd. The installer saw the array with no problem (although it did recommend exiting and changing the mounting from device name to a persistent alternative, so I’m using volume label now).
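In case it helps anyone making the same switch: on ext3 the label can be set with e2label and then used in fstab, e.g. (the label name and md device here are just examples):

# tag the filesystem with a label
e2label /dev/md0 SUSE_ROOT

# matching fstab entry that mounts by label instead of device name
LABEL=SUSE_ROOT  /  ext3  acl,user_xattr  1 1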
But I saw no problem at all like yours, and I have no idea why it is happening on your machine. You might try a PM to @ken_yap; he has expertise in this area, or he may be able to suggest another member who can help. I’m very sorry I can’t be of any more help. Good luck.