
Thread: Raid5 not starting up after reboot

  1. #1

    Raid5 not starting up after reboot

    I'm running openSUSE 11.4 and have set up two RAID5 configurations - arrays created, disks formatted, everything working fine. I've just rebooted and the RAID5 arrays fail to initialize.

    Getting these errors:
    Code:
    May 29 16:58:30 suse kernel: [ 1788.170692] md: md0 stopped.
    May 29 16:58:30 suse kernel: [ 1788.197864] md: invalid superblock checksum on sdb1
    May 29 16:58:30 suse kernel: [ 1788.197876] md: sdb1 does not have a valid v0.90 superblock, not importing!
    May 29 16:58:30 suse kernel: [ 1788.197903] md: md_import_device returned -22
    May 29 16:58:30 suse kernel: [ 1788.201565] md: bind<sde2>
    May 29 16:58:30 suse kernel: [ 1788.207588] md: invalid superblock checksum on sda2
    May 29 16:58:30 suse kernel: [ 1788.207701] md: sda2 does not have a valid v0.90 superblock, not importing!
    May 29 16:58:30 suse kernel: [ 1788.207715] md: md_import_device returned -22
    May 29 16:58:30 suse kernel: [ 1788.292422] md: md1 stopped.
    May 29 16:58:30 suse kernel: [ 1788.334886] md: bind<sde1>
    May 29 16:58:30 suse kernel: [ 1788.336734] md: bind<sdf1>
    May 29 16:58:30 suse kernel: [ 1788.337498] md: invalid superblock checksum on sda1
    May 29 16:58:30 suse kernel: [ 1788.337508] md: sda1 does not have a valid v0.90 superblock, not importing!
    May 29 16:58:30 suse kernel: [ 1788.337516] md: md_import_device returned -22
    May 29 16:58:30 suse kernel: [ 1788.385477] bio: create slab <bio-1> at 1
    May 29 16:58:30 suse kernel: [ 1788.385526] md/raid:md1: device sdf1 operational as raid disk 2
    May 29 16:58:30 suse kernel: [ 1788.385532] md/raid:md1: device sde1 operational as raid disk 1
    May 29 16:58:30 suse kernel: [ 1788.386242] md/raid:md1: allocated 3179kB
    May 29 16:58:30 suse kernel: [ 1788.389609] md/raid:md1: raid level 5 active with 2 out of 3 devices, algorithm 2
    May 29 16:58:30 suse kernel: [ 1788.389619] RAID conf printout:
    May 29 16:58:30 suse kernel: [ 1788.389623]  --- level:5 rd:3 wd:2
    May 29 16:58:30 suse kernel: [ 1788.389630]  disk 1, o:1, dev:sde1
    May 29 16:58:30 suse kernel: [ 1788.389634]  disk 2, o:1, dev:sdf1
    May 29 16:58:30 suse kernel: [ 1788.389719] md1: detected capacity change from 0 to 3000595644416
    May 29 16:58:30 suse kernel: [ 1788.413562]  md1: unknown partition table
    I managed to get it working with a command I found on another forum:

    Code:
    mdadm --assemble --verbose --update summaries /dev/md1 /dev/sda1 /dev/sde1 /dev/sdf1
    
    mdadm --assemble --verbose --update summaries /dev/md0 /dev/sda2 /dev/sdb1 /dev/sde2
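    For anyone else hitting this: the superblocks can be inspected read-only first with --examine, which prints a Checksum line ("correct", or the expected value when it's bad) for each member - device names here are just the ones from my md1:

    Code:
    # Read-only dump of each member's 0.90 superblock; check the Checksum line
    mdadm --examine /dev/sda1
    mdadm --examine /dev/sde1
    mdadm --examine /dev/sdf1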
    My mdadm.conf file:

    Code:
    DEVICE /dev/sda2 /dev/sdb1 /dev/sde2
    ARRAY /dev/md0 UUID=8d0e9eaa:ab89d7f1:8b94c90b:72da1a08
    
    DEVICE /dev/sda1 /dev/sde1 /dev/sdf1
    ARRAY /dev/md1 UUID=f77c10b6:da833e9c:8b94c90b:72da1a08
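    (The ARRAY lines can also be regenerated from whatever is currently assembled, which is handy for comparing UUIDs against the config file:)

    Code:
    # Prints one ARRAY line per running array; compare against /etc/mdadm.conf
    mdadm --detail --scan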
    mdadm --detail /dev/md0
    Code:
    /dev/md0:
            Version : 0.90
      Creation Time : Fri May 27 18:35:58 2011
         Raid Level : raid5
         Array Size : 976718848 (931.47 GiB 1000.16 GB)
      Used Dev Size : 488359424 (465.74 GiB 500.08 GB)
       Raid Devices : 3
      Total Devices : 3
    Preferred Minor : 0
        Persistence : Superblock is persistent
    
        Update Time : Sun May 29 17:13:41 2011
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 512K
    
               UUID : 8d0e9eaa:ab89d7f1:8b94c90b:72da1a08 (local to host suse)
             Events : 0.21951
    
        Number   Major   Minor   RaidDevice State
           0       8        2        0      active sync   /dev/sda2
           1       8       17        1      active sync   /dev/sdb1
           2       8       66        2      active sync   /dev/sde2
    mdadm --detail /dev/md1

    Code:
    /dev/md1:
            Version : 0.90
      Creation Time : Fri May 27 19:04:57 2011
         Raid Level : raid5
         Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
      Used Dev Size : 1465134592 (1397.26 GiB 1500.30 GB)
       Raid Devices : 3
      Total Devices : 3
    Preferred Minor : 1
        Persistence : Superblock is persistent
    
        Update Time : Sun May 29 17:13:49 2011
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 512K
    
               UUID : f77c10b6:da833e9c:8b94c90b:72da1a08 (local to host suse)
             Events : 0.53851
    
        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       65        1      active sync   /dev/sde1
           2       8       81        2      active sync   /dev/sdf1
    Any ideas? It's driving me nuts!

  2. #2
    Join Date
    Nov 2009
    Location
    West Virginia Sector 13
    Posts
    16,287

    Re: Raid5 not starting up after reboot

    What kind of RAID is this to start with? Software, real hardware, or fake (BIOS-assisted)?
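    You can usually tell from a running system with something like this (the dmraid check only matters if BIOS RAID was ever enabled):

    Code:
    # Linux software RAID (md) arrays show up here
    cat /proc/mdstat
    # FAKE/BIOS-assisted RAID metadata, if any
    dmraid -r
    # A real hardware controller appears as a single disk; look for the controller
    lspci | grep -i raid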

  3. #3

    Re: Raid5 not starting up after reboot

    It's a software RAID.

  4. #4
    Join Date
    Jun 2008
    Location
    UTC+10
    Posts
    9,683
    Blog Entries
    4

    Re: Raid5 not starting up after reboot

    How did you create the RAID? Via YaST or with the help of rescue CDs and command line programs?

  5. #5
    Join Date
    Jun 2008
    Location
    Podunk
    Posts
    32,336
    Blog Entries
    15

    Re: Raid5 not starting up after reboot

    Hi
    Are they EARS drives? Maybe you need to configure them for 4K blocks/sectors first?
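    A quick way to check (/dev/sdb is just an example device, and align-check needs a reasonably recent parted):

    Code:
    # Sector sizes the drive reports to the kernel
    cat /sys/block/sdb/queue/logical_block_size
    cat /sys/block/sdb/queue/physical_block_size
    # Verify the first partition is optimally aligned
    parted /dev/sdb align-check optimal 1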

  6. #6

    Re: Raid5 not starting up after reboot

    I used the command line to create them - mdadm --create, roughly as shown below.

    EARS drives? They're Samsung SATA drives.
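    The create commands were along these lines (reconstructed from memory, so the exact flags are illustrative; no -e/--metadata was given, which matches the v0.90 superblocks in the --detail output above):

    Code:
    # RAID5 over three members each; device lists taken from my first post
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda1 /dev/sde1 /dev/sdf1
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb1 /dev/sde2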

  7. #7
    Join Date
    Jun 2008
    Location
    UTC+10
    Posts
    9,683
    Blog Entries
    4

    Re: Raid5 not starting up after reboot

    Perhaps you should override the superblock version to 1.0 because 1.0 is what I got when I used YaST to create it instead of invoking mdadm --create manually.

  8. #8

    Re: Raid5 not starting up after reboot

    Quote Originally Posted by ken_yap
    Perhaps you should override the superblock version to 1.0 because 1.0 is what I got when I used YaST to create it instead of invoking mdadm --create manually.
    How do I go about doing that?

  9. #9
    Join Date
    Jun 2008
    Location
    UTC+10
    Posts
    9,683
    Blog Entries
    4

    Re: Raid5 not starting up after reboot

    -e, --metadata=, according to the mdadm man page. Or use YaST to create your array.
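    For a new array that would look something like this (member devices copied from your post, purely as an example):

    Code:
    # -e / --metadata selects the superblock format explicitly
    mdadm --create /dev/md1 --metadata=1.0 --level=5 --raid-devices=3 /dev/sda1 /dev/sde1 /dev/sdf1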

  10. #10

    Re: Raid5 not starting up after reboot

    Quote Originally Posted by ken_yap
    -e, --metadata=, according to the mdadm man page. Or use YaST to create your array.

    I've already created the array though. Will changing the metadata on a working array corrupt it in any way?

