Cannot create SW RAID??

This is the second time this has happened [on different machines], and it’s a big problem as it prevents us from updating/rebuilding machines:

  • When rebuilding a machine we would like to convert the HW RAID to SW RAID so it can be monitored via /proc/mdstat;

  • During the installation process, it is proving impossible to create a new SW RAID5 or RAID6 array (RAID0 for swap does seem to work OK, so it may be related to the RAID level chosen?). After configuring the SW RAID, installation fails because the partitions selected for RAID5 are “BUSY”!!

We have tried:

a) Writing blank GPT tables during installation (drop to shell and return);
b) Writing blank GPT tables via the Rescue System;
c) Writing a blank GPT partition table on each disk via System Rescue [distro];
d) Writing a blank DOS partition table via the Rescue System;
e) Formatting the disks with NTFS.

NONE of these options allowed the Installer to build the SW RAID! The VERY FIRST install actually completed, but the system did not boot.

Are there any recommendations posted for answers to questions like:

  1. What is the best partition table and boot configuration for SW RAID?

  2. Why does the installer think that partitions are “busy” when the target partition table is empty?

    TIA!

    Lee

On Tue 11 Mar 2014 09:56:01 PM CDT, omnitec wrote:

This is the second time this has happened [on different machines], and
it’s a big problem as it prevents us from updating/rebuilding machines:

  • When rebuilding a machine we would like to convert the HW RAID to SW
    RAID so it can be monitored via /proc/mdstat;

Hi
If you're using the HW controller, are you deleting the RAID setup and
then setting the drives up as JBOD?


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE 13.1 (Bottle) (x86_64) GNOME 3.10.2 Kernel 3.11.10-7-desktop

malcolmlewis;2629995 Wrote:
> Hi
> If you're using the HW controller, are you deleting the RAID setup and
> then setting the drives up as JBOD?

In both cases, the RAID configuration was removed, … else the installer would not see individual drives. On this system there were three drives, … I was attempting to create three RAID5 configurations. The first install did complete, but would not boot; subsequent attempts all ended with ‘partition busy’.

I briefly tested in QEMU and I can create RAID5 just fine during installation, at least when starting with blank disks.

  1. Why does the installer think that partitions are “busy” when the target partition table is empty?

How should we know? When you get the “busy” error, switch to a terminal (tty2) and examine your system. You are the only one who can answer this question.

I had a similar error when I first attempted to add a third disk and install on a VM that had a previous Linux MD setup I used for experiments. The installer did not correctly destroy the existing RAID arrays, so the partitioning step ended with an error. You may be using fake RAID, which automatically creates a Linux MD container on the disks, or you may have forgotten to remove something. Again - you are the only one in a position to actually investigate it.
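
If leftover arrays are what is holding the partitions, they can be torn down by hand from the installer shell before partitioning. A minimal sketch (md127 and the member partitions are placeholders - adjust to whatever /proc/mdstat actually shows):

mdadm --stop /dev/md127                  # stop the auto-assembled array
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1   # erase the old MD metadata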

Sure, … it might work fine with virgin disks, but that isn’t the question, is it?

If I could answer the question, I would not have posted here, eh? Would you have any information to contribute to a possible solution? Why would a partition be “busy” when it has a new, blank partition table created in different ways?

Bingo - that is the question which needs data - how is the partition table cleared properly when trying to create a new RAID array? Inquiring minds want to know!!!

On Wed 12 Mar 2014 04:16:01 AM CDT, omnitec wrote:

Bingo - that is the question which needs data - how is the partition
table cleared properly when trying to create a new RAID array?

Hi
So if you're using GPT disks, are you booting via UEFI, or booting from a
different disk via an MBR?

If using GPT disks, I usually boot from the openSUSE 13.1 rescue CD and
use gdisk to wipe the GPT and zero out the MBR (that's x, then z, to zap),
exit, re-run gdisk, create the partition, and set the type (fd00).
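
For reference, the key presses for that sequence look roughly like this (/dev/sdb is a placeholder; repeat for each member disk):

gdisk /dev/sdb
  Command: x         # extra functionality (expert) menu
  Expert command: z  # zap: destroy GPT data structures and exit
                     # answer y to blank out the MBR as well

gdisk /dev/sdb
  Command: n         # new partition (accept defaults or set a size)
  Command: t         # change the partition type
  Hex code: fd00     # Linux RAID
  Command: w         # write the table to disk and exit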


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE 13.1 (Bottle) (x86_64) GNOME 3.10.2 Kernel 3.11.10-7-desktop

With the information you provided - no. You did not even tell us what hardware (specifically, which RAID controller) you have. Nor did you provide the exact error (a screenshot at this point would be helpful).

When you get this error, switch to a terminal. Check /proc/mdstat to see whether you have some MD array running. Check “dmsetup ls” - maybe dmraid kicked in. Check /sys/block/sdX/sdXN/holders - it contains pointers to layered devices (if any).
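
Concretely, from the installer's second console (Ctrl-Alt-F2), something like this (sda1 below is a placeholder for the partition reported busy):

cat /proc/mdstat                 # any MD arrays already assembled?
dmsetup ls                       # any device-mapper (dmraid/LVM) mappings?
ls /sys/block/sda/sda1/holders   # which layered device holds the partition open?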

Thanks for the suggestion, … I finally figured out that part of the problem - the installer WOULD NOT WRITE an MSDOS partition table to a drive with a GPT partition table! Looks like the problem could be fdisk itself - it would not write the table even manually - I had to clear the disks with parted.
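
In case it helps anyone searching later, the parted step is a one-liner per disk (destructive - it replaces the whole partition table; /dev/sdb is an example):

parted /dev/sdb mklabel msdos   # write a fresh MSDOS (MBR) label over the GPT
parted /dev/sdb mklabel gpt     # or, to stay with GPT instead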

The system is now installed, … but it will not boot. I am using a clean 500MB partition on the first drive for /boot, … but neither “/boot” nor “MBR” as the boot loader location in the installation configuration will work. The system was happily running 12.1 for many months, … any pointers on how to isolate this part of the problem?

Thanks!!

fdisk does not really handle GPT-formatted disks; use parted or gdisk.

You are switching the type of RAID; that is generally not done. You need to back up any data you want to keep and wipe the disks very clean (i.e. zero out the entire disk), and then install. RAID can leave little pieces of configuration data in odd places, like the last sector or the first track.
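
A sketch of that cleanup from a rescue shell, assuming /dev/sdb is one of the ex-RAID disks (all three commands are destructive):

wipefs -a /dev/sdb          # clear known filesystem/RAID signatures
sgdisk --zap-all /dev/sdb   # destroy GPT (including the backup table at the end) and MBR
dd if=/dev/zero of=/dev/sdb bs=1M   # thorough but slow: zero the entire disk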

You did not say what kind of hardware RAID you had. Did you remove the RAID controller or kill it from the BIOS?

Finally figured it out and I wanted to let folks know that the issues were:

  1. Found out that fdisk CANNOT write an MSDOS partition table to a disk with a GPT table! This caused most of the heartache, … and a lot of wasted hours. I doubt that it's a SuSE bug, but hopefully folks will be able to search and find this note.

  2. The other major issue - a HW RAID system may not expose a single disk in the BIOS boot options, … final problem found, solved, success.

Thanks for the assistance!

Well, of course it will not write an MBR to a GPT disk; these are competing technologies.
You would use either GPT or MBR.

If you use GPT, the steps to make a bootable RAID (I am not sure how it works with RAID5; I used RAID1, but I assume it is the same) are below.
I assume you want to have a 3-drive RAID5 config.

Here is my test VM with a RAID6 bootable system disk:

md0 (RAID6) is “/”
md1 (RAID5) is “swap”
I do not have a separate “/home”; it is all on “/”,
and I did the RAIDed swap just for fun.
The system is bootable; however, I am not sure what would happen if I removed one of the disks.
Will try and let you know.


linux-cc2l:/ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active (auto-read-only) raid5 sdd2[3] sdc2[1] sdb2[0]
      2103040 blocks super 1.0 level 5, 128k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid6 sdb3[1] sdc3[2] sda3[0] sdd3[3]
      12566272 blocks super 1.0 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
linux-cc2l:/ # 

Steps to reproduce it:

On install, when asked about the partition setup, choose Edit.
Clear all preselected configuration on all the disks you want to RAID.

#1. For each disk you want to use in a RAID, create a new partition table.
I used GPT, but you can use MBR.
#2. Starting from the first disk (any one will do; I just find it easier to do them in order) -
in my case I had 4 disks: /sda /sdb /sdc /sdd -
on /sda create the partition layout you want to have. If you use GPT, the very first partition
MUST be “/boot”: not mounted, not RAIDed. This is very important, as you cannot boot from a RAIDed /boot partition. So click “Add”, make the size 1GB (this is the most you will ever need, and it is also the recommended minimum - I do not remember where I saw that, so don't hold me to it),
select do not mount and do not format, and choose “GRUB BOOT” as the type for a regular system, or “GRUB UEFI” if you have a UEFI BIOS.

done.

Create a partition for swap next.
Create a partition for “/”,
and, if needed, one for “/home”.

!!! DO NOT MOUNT ANY OF THE NEW PARTITIONS !!! Just create the new empty space for them:
choose do not format and do not mount,
then choose the wanted filesystem.

You should end up with a disk layout like:

/sda
…sda1 1GB BIOS GRUB /or/ UEFI GRUB
…sda2 whatEverSwapSizeIs swap
…sda3 whatEverRootSizeIs AnyLinuxFsTYPE for the “/” partition
…sda4 whatEverHomeSizeIs AnyLinuxFsTYPE for the “/home” partition

#3 Clone disk /sda to all other drives

#4 Set up the RAID using the matching partitions from each drive.
In the example above, the first partition on each disk is for /boot, so it is not used in any RAID;
the second partition is for “swap”, so it is used in md1;
the third partition is for “/”, so it is used in md0:
/sda3
/sdb3
/sdc3

#5 At the setup review screen, choose to edit the boot settings and make sure you select the option to boot from the GRUB boot partition if you use GPT, or from the MBR if you use MBR.
Also, if you use MBR you might not need the “GRUB BOOT” partition at all.

#6 Finish the install, and the system should boot just fine.
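
For comparison, a rough command-line equivalent of the same layout (a sketch only - YaST does all of this in the partitioner; the disk names, sizes, and the 3-disk RAID5 for “/” are assumptions):

# 1-2. Partition the first disk (GPT): GRUB boot, swap member, root member
sgdisk -n1:0:+1G -t1:ef02 /dev/sda   # BIOS boot partition for GRUB, not RAIDed
sgdisk -n2:0:+2G -t2:fd00 /dev/sda   # swap member (Linux RAID type)
sgdisk -n3:0:0   -t3:fd00 /dev/sda   # root member, rest of the disk

# 3. Clone the layout to the other drives (and give them new GUIDs)
sgdisk -R /dev/sdb /dev/sda && sgdisk -G /dev/sdb
sgdisk -R /dev/sdc /dev/sda && sgdisk -G /dev/sdc

# 4. Build the arrays from the matching partitions
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3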

Actually you can, but YaST (and perl-Bootloader) does not support it. I just installed a VM with / on RAID5 without a separate /boot. I got a warning on the summary screen that the system would be unbootable, but it booted just fine … :)

Did you try pulling one of the drives out and booting? :)
My RAID6 won't boot if I pull the first drive out.
Any other drive is OK, but the first one brings the system down.

openSUSE offers limited support for installing the bootloader on multiple drives (basically, only RAID1 is supported). You need to do it manually (unfortunately, you will need to repeat it every time the bootloader is updated).
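
The manual step, in openSUSE 13.1 terms (GRUB2; a sketch assuming a BIOS/GPT setup with a GRUB boot partition on each member disk):

# Install GRUB2 on every array member so the box can still boot if one disk dies
for d in /dev/sda /dev/sdb /dev/sdc; do grub2-install "$d"; done
# Repeat after every bootloader update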

It is better if you start new thread for this problem.

Thanks, but at the moment it is not my issue.
I was playing with it before, and again to check something so I could be confident in my response to the OP, but I do not build my machines with a RAIDed system drive. Not yet, anyway. I did try with RAID1 as well, with no success.

Yes, in 13.1 the installer puts the bootloader on one drive only. This should be fixed in Factory.