RAID5 not starting up after reboot

I'm running openSUSE 11.4 and have set up 2 x RAID5 arrays - arrays created, disks formatted, everything working fine. I've just rebooted and the RAID5 arrays fail to initialize.

Getting these errors:

May 29 16:58:30 suse kernel: [ 1788.170692] md: md0 stopped.
May 29 16:58:30 suse kernel: [ 1788.197864] md: invalid superblock checksum on sdb1
May 29 16:58:30 suse kernel: [ 1788.197876] md: sdb1 does not have a valid v0.90 superblock, not importing!
May 29 16:58:30 suse kernel: [ 1788.197903] md: md_import_device returned -22
May 29 16:58:30 suse kernel: [ 1788.201565] md: bind<sde2>
May 29 16:58:30 suse kernel: [ 1788.207588] md: invalid superblock checksum on sda2
May 29 16:58:30 suse kernel: [ 1788.207701] md: sda2 does not have a valid v0.90 superblock, not importing!
May 29 16:58:30 suse kernel: [ 1788.207715] md: md_import_device returned -22
May 29 16:58:30 suse kernel: [ 1788.292422] md: md1 stopped.
May 29 16:58:30 suse kernel: [ 1788.334886] md: bind<sde1>
May 29 16:58:30 suse kernel: [ 1788.336734] md: bind<sdf1>
May 29 16:58:30 suse kernel: [ 1788.337498] md: invalid superblock checksum on sda1
May 29 16:58:30 suse kernel: [ 1788.337508] md: sda1 does not have a valid v0.90 superblock, not importing!
May 29 16:58:30 suse kernel: [ 1788.337516] md: md_import_device returned -22
May 29 16:58:30 suse kernel: [ 1788.385477] bio: create slab <bio-1> at 1
May 29 16:58:30 suse kernel: [ 1788.385526] md/raid:md1: device sdf1 operational as raid disk 2
May 29 16:58:30 suse kernel: [ 1788.385532] md/raid:md1: device sde1 operational as raid disk 1
May 29 16:58:30 suse kernel: [ 1788.386242] md/raid:md1: allocated 3179kB
May 29 16:58:30 suse kernel: [ 1788.389609] md/raid:md1: raid level 5 active with 2 out of 3 devices, algorithm 2
May 29 16:58:30 suse kernel: [ 1788.389619] RAID conf printout:
May 29 16:58:30 suse kernel: [ 1788.389623]  --- level:5 rd:3 wd:2
May 29 16:58:30 suse kernel: [ 1788.389630]  disk 1, o:1, dev:sde1
May 29 16:58:30 suse kernel: [ 1788.389634]  disk 2, o:1, dev:sdf1
May 29 16:58:30 suse kernel: [ 1788.389719] md1: detected capacity change from 0 to 3000595644416
May 29 16:58:30 suse kernel: [ 1788.413562]  md1: unknown partition table
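
For reference, the superblocks the kernel is rejecting can be inspected directly; this is just a diagnostic sketch, using the device names from the log above:

mdadm --examine /dev/sdb1
mdadm --examine /dev/sda1
mdadm --examine /dev/sda2
cat /proc/mdstat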

I have managed to get it working using something I found on one of the forums:

mdadm --assemble --verbose --update summaries /dev/md1 /dev/sda1 /dev/sde1 /dev/sdf1

mdadm --assemble --verbose --update summaries /dev/md0 /dev/sda2 /dev/sdb1 /dev/sde2

My mdadm.conf file:

DEVICE /dev/sda2 /dev/sdb1 /dev/sde2
ARRAY /dev/md0 UUID=8d0e9eaa:ab89d7f1:8b94c90b:72da1a08

DEVICE /dev/sda1 /dev/sde1 /dev/sdf1
ARRAY /dev/md1 UUID=f77c10b6:da833e9c:8b94c90b:72da1a08
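
For comparison, the ARRAY lines can be regenerated from the running arrays, and the two DEVICE lines could be replaced by a single "DEVICE partitions" line so mdadm scans every partition. A minimal sketch, assuming both arrays are currently assembled:

# print ARRAY lines for the assembled arrays, to compare against /etc/mdadm.conf
mdadm --detail --scan

If the arrays are assembled from the initrd rather than from the boot scripts, the initrd would also need rebuilding afterwards (mkinitrd on openSUSE) - that part is an assumption about this particular setup.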

mdadm --detail /dev/md0

/dev/md0:
        Version : 0.90
  Creation Time : Fri May 27 18:35:58 2011
     Raid Level : raid5
     Array Size : 976718848 (931.47 GiB 1000.16 GB)
  Used Dev Size : 488359424 (465.74 GiB 500.08 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun May 29 17:13:41 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           UUID : 8d0e9eaa:ab89d7f1:8b94c90b:72da1a08 (local to host suse)
         Events : 0.21951

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       17        1      active sync   /dev/sdb1
       2       8       66        2      active sync   /dev/sde2

mdadm --detail /dev/md1

/dev/md1:
        Version : 0.90
  Creation Time : Fri May 27 19:04:57 2011
     Raid Level : raid5
     Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
  Used Dev Size : 1465134592 (1397.26 GiB 1500.30 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sun May 29 17:13:49 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           UUID : f77c10b6:da833e9c:8b94c90b:72da1a08 (local to host suse)
         Events : 0.53851

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       65        1      active sync   /dev/sde1
       2       8       81        2      active sync   /dev/sdf1

Any ideas? It's driving me nuts!

What kind of RAID is this to start with? Software, real hardware, or fake (BIOS-assisted)?

It's a software RAID.

How did you create the RAID? Via YaST or with the help of rescue CDs and command line programs?

Hi
Are they EARS drives? Maybe you need to configure them for 4K blocks/sectors first?

I used the command line to create them: mdadm --create

EARS drives? They're Samsung SATA drives.
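
In case alignment is the worry, the partition start sectors can be checked like this (just a sanity check, using /dev/sda as an example):

fdisk -lu /dev/sda

Start sectors divisible by 8 sit on 4K boundaries.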

Perhaps you should override the superblock (metadata) version to 1.0, because 1.0 is what I got when I used YaST to create an array instead of invoking mdadm --create manually.

How do I go about doing that?

-e, --metadata=, according to the mdadm man page. Or use YaST to create your array.
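
For example, creating an array with 1.0 metadata would look roughly like this (a sketch only, reusing the device names from your md1; --create rewrites the metadata, so treat it as destructive):

mdadm --create /dev/md1 --metadata=1.0 --level=5 --raid-devices=3 /dev/sda1 /dev/sde1 /dev/sdf1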

I’ve already created the array though. Will changing the metadata on a working array corrupt it in any way?

I don’t think you can change the metadata location non-destructively. You’ll have to start over.

In case you're avoiding YaST because you only have the CLI, you do know that YaST will run in character-cell (ncurses) mode?

Thanks - don't think I'll be starting over. I'll just add the fixup commands to rc.local :)
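
For the record, the fixup would be roughly this, using the assemble commands from above (a sketch; on openSUSE the usual place is /etc/init.d/boot.local rather than rc.local):

# /etc/init.d/boot.local (or rc.local on other distros)
mdadm --assemble --verbose --update summaries /dev/md1 /dev/sda1 /dev/sde1 /dev/sdf1
mdadm --assemble --verbose --update summaries /dev/md0 /dev/sda2 /dev/sdb1 /dev/sde2
# mount -a   # only if the filesystems are listed in /etc/fstab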