RAID5 Array Fails to Automount after Reboot

Howdy y'all, I need some help.

I’ve got a RAID5 array that doesn’t want to automount after rebooting.

I’m pretty familiar with Linux, RAID, and mdadm, and up until now I’ve had the RAID5 array working just fine. However, whenever I reboot, the array drops off and won’t remount until I manually assemble and then mount the thing.

I find this odd because I had everything automounting just fine back in 10.3, and even in 11.0 (I think - not sure on that). Currently, things are working, but I’d really like not to have to type

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

followed by

mount /dev/md0 /mnt/data

every time I reboot. Even including this in some sort of start-up script seems kludgey…

Surely there must be a more elegant way of automatically bringing up a RAID5 array after booting?

I’m not sure what information you’ll need, so I’m going to go ahead and include as much as I can anticipate…

So having already used the commands:

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mount /dev/md0 /mnt/data

Here is the output of fdisk -l:

mediaserver:~ # fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000bdcd2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000277cc

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0008df0e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0004d61d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sde: 122.9 GB, 122942324736 bytes
255 heads, 63 sectors/track, 14946 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbd1fbd1f

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           2       14423   115844715   83  Linux
/dev/sde2           14424       14946     4200997+  82  Linux swap / Solaris

Disk /dev/md0: 3000.6 GB, 3000606130176 bytes
2 heads, 4 sectors/track, 732569856 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Here’s my fstab:

mediaserver:~ # cat /etc/fstab
/dev/disk/by-id/ata-Maxtor_6L120P0_L3DHH96H-part2 swap                 swap       defaults              0 0
/dev/disk/by-id/ata-Maxtor_6L120P0_L3DHH96H-part1 /                    ext3       acl,user_xattr        1 1
proc                 /proc                proc       defaults              0 0
sysfs                /sys                 sysfs      noauto                0 0
debugfs              /sys/kernel/debug    debugfs    noauto                0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
/dev/md0             /mnt/data            xfs        auto                  0 0

Here’s my /etc/mdadm/mdadm.conf:

mediaserver:~ # cat /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=0.90 UUID=be8cc62f:22ec219d:c109596b:d7e29b7e

Here’s my --detail of /dev/md0:

mediaserver:~ # mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Apr 22 22:17:49 2009
     Raid Level : raid5
     Array Size : 2930279424 (2794.53 GiB 3000.61 GB)
  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Apr 26 15:23:48 2009
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : be8cc62f:22ec219d:c109596b:d7e29b7e (local to host mediaserver)
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1

I would appreciate any suggestions or pointers you might have… I feel like whatever I’m missing is something simple…

Thanks guys,

-Todd

The Software-RAID HOWTO: Tweaking, tuning and troubleshooting
See if you get any clues from here.
What are the entries in your /etc/raidtab file?
Do you see any RAID related messages while booting?

I have a similar problem with a RAID5 array and 11.1… It was working just fine for months and then suddenly stopped responding. I rebooted, and now the RAID5 array fails to activate. When I boot into the rescue system, cat /proc/mdstat shows that all of the drives in that array are marked as spares.

Is there anything I can do, or is a reinstall the only solution?

Type this command and post the output:

mdadm --examine --scan

Then type

cat /etc/mdadm.conf

If there are no entries in mdadm.conf for your array, you can do this:

mdadm --examine --scan >> /etc/mdadm.conf
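The appended entry should look much like the ARRAY line already quoted from the mdadm.conf earlier in this thread, e.g.:

ARRAY /dev/md0 level=raid5 num-devices=4 metadata=0.90 UUID=be8cc62f:22ec219d:c109596b:d7e29b7e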

Thanks for the response!

I checked out the link you posted; unfortunately, I’ve read that document before and it wasn’t much help. One thing I will note is that I hadn’t created an /etc/raidtab file, but I was under the impression that wasn’t necessary.

But just to make sure, I went ahead and created /etc/raidtab and copy-pasted the contents of the file found in your link, making any modifications as they applied to my system. I rebooted, and the thing still doesn’t work.

I checked /var/log/messages and dmesg, and there is no mention of mdadm or md0 in either log file (I ran a search for those two strings, which came up empty).

How can I get this RAID array to auto-assemble and then automount at boot?

I have a RAID1 array and it automounts fine. But I haven’t traced how it gets triggered during boot. Here’s a section of the boot.msg file:

<6>md: raid0 personality registered for level 0
<6>xor: automatically using best checksumming function: generic_sse
<6> generic_sse: 7749.000 MB/sec
<6>xor: using function: generic_sse (7749.000 MB/sec)
<6>async_tx: api initialized (async)
<4>raid6: int64x1 2266 MB/s
<4>raid6: int64x2 2669 MB/s
<4>raid6: int64x4 2340 MB/s
<4>raid6: int64x8 2129 MB/s
<4>raid6: sse2x1 1079 MB/s
<4>raid6: sse2x2 2057 MB/s
<4>raid6: sse2x4 2790 MB/s
<4>raid6: using algorithm sse2x4 (2790 MB/s)
<6>md: raid6 personality registered for level 6
<6>md: raid5 personality registered for level 5
<6>md: raid4 personality registered for level 4
<6>md: md0 stopped.
<6>md: bind<sdb2>
<6>md: bind<sda2>
<6>raid1: raid set md0 active with 2 out of 2 mirrors
<6>md0: bitmap initialized from disk: read 13/13 pages, set 0 bits
<6>created bitmap (193 pages) for device md0

Perhaps you need to look at the modules loaded at boot and also after booting. I notice I have raid1 included. Also work out where, if at all, the boot.md script gets run. Good luck.

Alright, well, since time’s running out, I’m going to post one possible solution, but let me tell you right now: this is an ugly, horrible hack, and is not BY ANY MEANS elegant. At all. But I’ve got more important things to do than spend hours troubleshooting this, so let me just post what I’ve got.

0: Become root.
1:

touch /root/raidautomount.sh

2:

mkdir /mnt/data

3:

vim /root/raidautomount.sh

4: Insert:

/sbin/mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mount /dev/md0 /mnt/data

5: Write, then Quit.
6:

chmod +x /root/raidautomount.sh

7:

crontab -e

8: Insert:

@reboot /root/raidautomount.sh

9: Write, then Quit.
10: reboot -n

There ya go. You should now have /dev/md0 mounted at /mnt/data the next time your system comes back up. Remember, I’ve already gotten /dev/md0 properly formatted with a filesystem and performed all of the other necessary preparation to get to this point. This hack is only meant to save me from the hassle of logging into my server as root and typing the assemble and mount commands every time I restart the ****ed thing. Also remember that /mnt/data will be owned by root, and won’t be readable or writable by anyone else until you either change ownership or change the permissions on the directory, but that should be trivial - chmod and chown are your friends. Google them or use man to read up on how they work.
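For completeness, here’s the whole thing as one script with a small sanity check bolted on (still just a sketch of the same hack; the device names and mount point are specific to my box):

#!/bin/sh
# Assemble the RAID5 array from its member partitions, then mount it.
# Only attempt the mount if the assemble step actually succeeded.
if /sbin/mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; then
    /bin/mount /dev/md0 /mnt/data
else
    echo "md0 assembly failed, not mounting" | logger -t raidautomount
fi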

It bothers me that this isn’t what I would call an elegant solution, but since I have no idea why /dev/md0 isn’t auto-assembled then auto-mounted, and since I’ve already wasted a day or two trying to figure this out and need to move on to other things, this is the best I can come up with at the moment.

If anyone figures out what I’m missing, what checkbox needs ticking, or what configuration file needs modifying, will you please post it, for the sake of my sanity and my slightly OCD-like need for a proper, elegant solution? Judging by these posts and my other posts on other forums, I’m not the only one who’s come across this problem.

Thanks guys,
-Todd

Do you still have the same problem? I assume that your problem is due to the fact that you built the array with the command:

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

and not with the command

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

On boot-up, are the RAID modules loaded so that the RAID5 personality and the md0 device are visible?

If not, that suggests the boot-up process optimisations have probably caused it.
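You can check after booting with something like this (raid456 is the module name for the RAID5 personality on recent kernels; older kernels just call it raid5):

lsmod | grep raid
cat /proc/mdstat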

I also have this problem, although my situation is slightly more complicated because I have LVM on the RAID5 - I cannot get the thing to assemble automatically at boot.

mdadm is reading its config files just fine, and mdadm --assemble /dev/md0 works as you would expect once the system is up, but I have to keep the RAID LVs commented out of /etc/fstab and, at every reboot, run a script to assemble the array, get LVM to rescan the PVs, and bring the LVs back online before I can mount them.
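For the record, the script I run by hand boils down to roughly this (the volume group, LV, and mount point names are just placeholders for mine):

mdadm --assemble /dev/md0
pvscan                              # make LVM re-read the PV on md0
vgchange -ay datavg                 # activate the logical volumes
mount /dev/datavg/data /srv/data    # then mount them as usual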

This is not cool. How is it supposed to work? There must be a script or a service somewhere which is supposed to assemble the array during boot, surely?

Figured it out - for people who pull this back from a search.

YaST does not show the boot.md service in the “simple” runlevel view, and it is disabled by default on openSUSE 11.2 - the mdadm monitoring service gets started, but the MD arrays do not get assembled. boot.md is required to assemble your arrays during boot; you can either turn it on through the expert view in YaST or just run this from a root shell:

chkconfig boot.md on
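You can confirm it’s now enabled with:

chkconfig --list boot.md

which should now report the service as switched on.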

You should be able to mark those file systems noauto and then use an /etc/init.d script to automate start/stop of the RAID array, for example something like the sketch below.
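A minimal sketch of such an init script, assuming the plain /dev/md0 on /mnt/data setup from the start of this thread (untested; the raid-data name is just an example, adjust to taste):

#!/bin/sh
# /etc/init.d/raid-data -- assemble and mount /dev/md0 by hand
### BEGIN INIT INFO
# Provides:          raid-data
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     3 5
# Default-Stop:      0 1 2 6
# Description:       Assemble /dev/md0 and mount it on /mnt/data
### END INIT INFO

case "$1" in
    start)
        /sbin/mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        /bin/mount /dev/md0 /mnt/data
        ;;
    stop)
        /bin/umount /mnt/data
        /sbin/mdadm --stop /dev/md0
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac

Drop it in /etc/init.d, chmod +x it, then enable it with insserv raid-data (or chkconfig raid-data on).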

It’s a definite bug in 11.2-GM; RAID & LVM did work for / in 11.1.

There are some bugs about dmraid, as well as mdraid & LVM, in Bugzilla.

Take a look at https://bugzilla.novell.com/buglist.cgi?short_desc=RAID&classification=openSUSE&query_format=advanced&bug_status=NEW&bug_status=ASSIGNED&bug_status=NEEDINFO&bug_status=REOPENED&short_desc_type=allwordssubstr&product=openSUSE%2011.2